Text detection from images using EasyOCR: Hands-on guide

# Changing the image path IMAGE_PATH = 'Turkish_text.png' # Same code as before, just changing the language list from ['en'] to ['tr'] reader = easyocr.Reader(['tr']) result = reader.readtext(IMAGE_PATH, paragraph="False") result Output: [[[[89, 7], [717, 7], [717, 108], [89, 108]], 'Most Common Texting Slang in Turkish'], [[[392, 234], [446, 234], [446, 260], [392, 260]], 'test'], [[[353, 263], [488, 263], [488, 308], [353, 308]], 'yazmak'], [[[394, 380], [446, 380], [446, 410], [394, 410]], 'link'], [[[351, 409], [489, 409], [489, 453], [351, 453]], 'bağlantı'], [[[373, 525], […]
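The excerpt above is flattened by the page layout; as a quick, self-contained sketch of the same EasyOCR call (assuming `easyocr` is installed and a local Turkish_text.png exists; the file name is taken from the excerpt), it would look roughly like this:

```python
# Minimal EasyOCR sketch based on the excerpt above. Assumes `pip install easyocr`
# and a local Turkish_text.png; the image path is illustrative.
import easyocr

IMAGE_PATH = 'Turkish_text.png'

# Load the Turkish recognition model (set gpu=False to force CPU-only inference).
reader = easyocr.Reader(['tr'], gpu=False)

# Note: `paragraph` expects a boolean. The string "False" used in the excerpt is
# truthy, so it actually turns paragraph grouping on and drops the per-detection
# confidence score from each result tuple.
result = reader.readtext(IMAGE_PATH, paragraph=False)

# With paragraph=False each entry is (bounding_box, text, confidence).
for bbox, text, confidence in result:
    print(f'{text!r} ({confidence:.2f}) at {bbox}')
```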

Read more

UmlsBERT: Augmenting Contextual Embeddings with a Clinical Metathesaurus

UmlsBERT UmlsBERT: Clinical Domain Knowledge Augmentation of Contextual Embeddings Using the Unified Medical Language System Metathesaurus General info This is the code that was used for the paper: UmlsBERT: Augmenting Contextual Embeddings with a Clinical Metathesaurus (NAACL 2021). In this work, we introduced UmlsBERT, a contextual embedding model capable of integrating domain knowledge during pre-training. It was trained on biomedical corpora and uses the Unified Medical Language System (UMLS) clinical Metathesaurus in two ways: We proposed a new multi-label […]

Read more

PyTorch implementation for Graph Contrastive Learning Automated

Graph Contrastive Learning Automated PyTorch implementation for Graph Contrastive Learning Automated. Yuning You, Tianlong Chen, Yang Shen, Zhangyang Wang. In ICML 2021. Overview In this repository, we propose a principled framework named joint augmentation selection (JOAO) to automatically, adaptively, and dynamically select augmentations during GraphCL training. A sanity check shows that the selection aligns with previous "best practices", as shown in Figure 2. Dependencies Experiments Citation If you use this code for your research, please cite our paper. @article{you2021graph, title={Graph Contrastive […]
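The excerpt only names the JOAO idea, so the following is a rough schematic sketch (in plain PyTorch, not the authors' implementation) of what selecting augmentations during contrastive training can look like: an augmentation pair is sampled from a categorical distribution each step, the encoder is updated on a standard NT-Xent loss, and probability mass is shifted toward the harder (higher-loss) augmentations. The augmentation names, `apply_aug`, and the encoder are placeholders.

```python
import torch
import torch.nn.functional as F

# Placeholder augmentation names; in GraphCL these are graph transformations
# such as node dropping, edge perturbation, subgraph sampling, attribute masking.
AUGMENTATIONS = ['node_drop', 'edge_perturb', 'subgraph', 'attr_mask']

def nt_xent(z1, z2, temperature=0.5):
    """Simplified NT-Xent contrastive loss over two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (N, N) cosine similarities
    labels = torch.arange(z1.size(0))            # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

def training_step(encoder, optimizer, batch, apply_aug, probs, step_size=0.1):
    """One schematic step: sample an augmentation pair, minimise the contrastive
    loss w.r.t. the encoder, then nudge the sampling distribution toward the
    augmentations that currently produce the highest loss."""
    idx = torch.multinomial(probs, 2, replacement=True).tolist()
    view1 = apply_aug(batch, AUGMENTATIONS[idx[0]])
    view2 = apply_aug(batch, AUGMENTATIONS[idx[1]])

    loss = nt_xent(encoder(view1), encoder(view2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Crude bandit-style update of the augmentation distribution.
    with torch.no_grad():
        probs[idx] += step_size * loss.item()
        probs /= probs.sum()
    return loss.item()
```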

Read more

Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction

Neural Deformation Graphs Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction. Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Justus Thies, Angela Dai, Matthias Nießner. CVPR 2021 (Oral Presentation). This repository contains the code for the CVPR 2021 paper Neural Deformation Graphs, a novel approach for globally-consistent deformation tracking and 3D reconstruction of non-rigid objects. Specifically, we implicitly model a deformation graph via a deep neural network and impose per-frame viewpoint consistency as well as inter-frame graph and surface consistency constraints in a self-supervised fashion. […]

Read more

Self-Damaging Contrastive Learning with python

SDCLR The recent breakthrough achieved by contrastive learning accelerates the pace of deploying unsupervised training on real-world data applications. However, unlabeled data in reality is commonly imbalanced and follows a long-tail distribution, and it is unclear how robustly the latest contrastive learning methods perform in this practical scenario. This paper proposes to explicitly tackle this challenge via a principled framework called Self-Damaging Contrastive Learning (SDCLR), which automatically balances representation learning without knowing the classes. Our main inspiration is […]

Read more

A novel attention-based architecture for vision-and-language navigation

Episodic Transformers (E.T.) Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer that encodes language inputs and the full episode history of visual observations and actions. This code reproduces the results obtained with E.T. on the ALFRED benchmark. To learn more about the benchmark and the original code, please refer to the ALFRED repository. Quickstart Clone the repo: $ git clone https://github.com/alexpashevich/E.T..git ET $ export ET_ROOT=$(pwd)/ET $ export ET_LOGS=$ET_ROOT/logs $ export ET_DATA=$ET_ROOT/data $ export […]

Read more

Self-Supervised Learning for Sketch and Handwriting

Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting, CVPR 2021. Ayan Kumar Bhunia, Pinaki Nath Chowdhury, Yongxin Yang, Timothy Hospedales, Tao Xiang, Yi-Zhe Song, “Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021. Abstract Self-supervised learning has gained prominence due to its efficacy at learning powerful representations from unlabelled data that achieve excellent performance on many challenging downstream tasks. However, supervision-free pretext tasks are challenging to design and usually […]

Read more

LiDAR-based Place Recognition using Spatiotemporal Higher-Order Pooling

Locus This repository is an open-source implementation of the ICRA 2021 paper: Locus: LiDAR-based Place Recognition using Spatiotemporal Higher-Order Pooling. More information: https://research.csiro.au/robotics/locus-pr/ Paper pre-print: https://arxiv.org/abs/2011.14497 Method overview: Locus is a global descriptor for large-scale place recognition using sequential 3D LiDAR point clouds. It encodes topological relationships and temporal consistency of scene components to obtain a discriminative and viewpoint-invariant scene representation. Usage Set up environment This project has been tested on Ubuntu 18.04 (with Open3D 0.11, TensorFlow 1.8.0, pcl […]

Read more

Towards Part-Based Understanding of RGB-D Scans

part-based-scan-understanding Towards Part-Based Understanding of RGB-D Scans (CVPR 2021) We propose the task of part-based scene understanding of real-world 3D environments: from an RGB-D scan of a scene, we detect objects and, for each object, predict its decomposition into geometric part masks, which, composed together, form the complete geometry of the observed object. Download Paper (.pdf) Demo samples Get started The core of this repository is a network that takes as input preprocessed scan voxel crops and produces voxelized part […]

Read more

Deep Networks from the Principle of Rate Reduction

redunet_paper Deep Networks from the Principle of Rate Reduction. This repository is the official NumPy implementation of the paper Deep Networks from the Principle of Rate Reduction (2021) by Kwan Ho Ryan Chan* (UC Berkeley), Yaodong Yu* (UC Berkeley), Chong You* (UC Berkeley), Haozhi Qi (UC Berkeley), John Wright (Columbia), and Yi Ma (UC Berkeley). For the PyTorch version of ReduNet, please visit https://github.com/ryanchankh/redunet. What is ReduNet? ReduNet is a deep neural network constructed naturally by deriving the gradients of the Maximal […]
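The excerpt is cut off mid-sentence; for orientation, the rate-reduction objective from which ReduNet's layers are derived is usually stated in the maximal coding rate reduction literature roughly as follows (with $Z \in \mathbb{R}^{d\times n}$ the learned features, $\Pi^{j}$ the membership matrices of the $k$ classes, and $\epsilon$ a prescribed distortion):

```latex
\Delta R(Z,\Pi,\epsilon)
  = \frac{1}{2}\log\det\!\Big(I + \tfrac{d}{n\epsilon^{2}}\, Z Z^{\top}\Big)
  \;-\; \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi^{j})}{2n}
      \log\det\!\Big(I + \tfrac{d}{\operatorname{tr}(\Pi^{j})\,\epsilon^{2}}\, Z\,\Pi^{j} Z^{\top}\Big)
```

The first term encourages the features of all samples to span a large volume, while the second compresses the features within each class; ReduNet's layers correspond to unrolled gradient-ascent iterations on this objective.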

Read more