A collection of Blender tools built with Python

A collection of Blender tools I’ve written for myself over the years. I use these daily, so they should be mostly bug-free. Feel free to take and use any part of this project. gret can be typed with one hand in the search bar. Blender 2.92 or later is required. Download the latest release. In Blender, go to Edit → Preferences → Add-ons → Install. Find and select the downloaded zip file, then click Install Add-on. Enable the add-on by clicking […]

Read more

A mixed precision library for JAX in Python

Mixed precision training in JAX. Mixed precision training [0] is a technique that mixes the use of full and half precision floating point numbers during training to reduce the memory bandwidth requirements and improve the computational efficiency of a given model. This library implements support for mixed precision training in JAX by providing two key abstractions (mixed precision “policies” and loss scaling). Neural network libraries (such as Haiku) can integrate with jmp and provide “Automatic Mixed Precision (AMP)” support (automating or simplifying the application of policies to modules). All […]
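As a rough illustration of what a mixed precision “policy” does, here is a NumPy sketch of the concept — parameters and outputs kept in float32 while the compute runs in float16. This is not jmp's actual API; the class and method names below are invented for illustration:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Policy:
    # Dtypes the policy assigns to the three roles a mixed precision
    # policy typically distinguishes: parameters, compute, and outputs.
    param_dtype: type
    compute_dtype: type
    output_dtype: type

    def cast_to_compute(self, x):
        return x.astype(self.compute_dtype)

    def cast_to_output(self, x):
        return x.astype(self.output_dtype)

# Keep parameters and outputs in float32, run the matmul in float16.
policy = Policy(np.float32, np.float16, np.float32)

w = np.ones((4, 4), dtype=policy.param_dtype)  # full-precision parameters
x = np.ones((2, 4), dtype=np.float32)          # full-precision input

# Cast both operands down for the compute step, then cast the result back up.
y = policy.cast_to_compute(x) @ policy.cast_to_compute(w)  # half-precision compute
y = policy.cast_to_output(y)                               # full-precision output
print(y.dtype)  # float32
```

The second abstraction mentioned above, loss scaling, multiplies the loss by a constant before backpropagation so that small gradients do not underflow in float16, then divides the gradients by the same constant afterwards.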

Read more

Attention in Attention Network for Image Super-Resolution

A2N This repository is a PyTorch implementation of the paper “Attention in Attention Network for Image Super-Resolution” [arXiv]. Visual results from the paper are available at Google Drive or Baidu Netdisk (password: 7t74). Unofficial TensorFlow implementation: https://github.com/Anuj040/superres Test Dependencies: PyTorch==0.4.1 (will be updated to support PyTorch>1.0 in the future). You can download the test sets from Google Drive. Put the test data in ../Data/benchmark/. python main.py --scale 4 --data_test Set5 --pre_train ./experiment/model/aan_x4.pt --chop --test_only If you use a CPU, please add […]

Read more

Unsupervised Pre-training for Person Re-identification

LUPerson The repository is for our CVPR 2021 paper Unsupervised Pre-training for Person Re-identification. LUPerson Dataset: LUPerson is currently the largest unlabeled dataset for Person Re-identification, and is used for unsupervised pre-training. LUPerson consists of 4M images of over 200K identities and covers a much more diverse range of capturing environments. Details can be found at ./LUP. Pre-trained Models. Finetuned results for MGN with ResNet50:

Dataset      mAP          cmc1         path
MSMT17       66.06/79.93  85.08/87.63  MSMT
DukeMTMC     82.27/91.70  90.35/92.82  Duke
Market1501   91.12/96.16  96.26/97.12  Market
CUHK03-L     […]

Read more

Bot that automatically answers giga unitel questions

Gigabot+ A bot that automatically answers giga unitel quiz questions. Note: not compatible with Windows 7. Installing this tool is very easy: pip install requests, then python gb.py. Menu: Start, Train, Play. You can only play if the client is subscribed. Before choosing the “Play” option, choose the “Train” option, exit the program, and then open it again. To use it on Android, install the script in Termux. By: Joa Roque GitHub https://github.com/joaroque/gigabot-plus

Read more

Boosting Co-teaching with Compression Regularization for Label Noise

Nested-Co-teaching ([email protected]) PyTorch implementation of the paper “Boosting Co-teaching with Compression Regularization for Label Noise” [PDF]. If our project is helpful for your research, please consider citing:

@inproceedings{chen2021boosting,
  title={Boosting Co-teaching with Compression Regularization for Label Noise},
  author={Chen, Yingyi and Shen, Xi and Hu, Shell Xu and Suykens, Johan AK},
  booktitle={CVPR Learning from Limited and Imperfect Data (L2ID) workshop},
  year={2021}
}

Our model can be trained on a single GeForce GTX 1080Ti GPU (12G); this code has been tested with PyTorch […]
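For context, the core co-teaching step that this work builds on has two peer networks each select its small-loss samples (treated as likely-clean labels) for the *other* network to train on. A minimal sketch of that selection step, assuming per-sample losses are already computed — function and variable names here are illustrative, not from the repository:

```python
def select_small_loss(losses, keep_ratio):
    """Return indices of the keep_ratio fraction of samples with the
    smallest loss; under the small-loss criterion these are treated
    as the samples most likely to have clean labels."""
    num_keep = int(len(losses) * keep_ratio)
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return order[:num_keep]

# Per-sample losses from two peer networks on the same mini-batch.
losses_a = [0.1, 2.3, 0.4, 5.0, 0.2, 0.3]
losses_b = [0.2, 0.1, 3.1, 0.3, 4.0, 0.2]

# Each network trains only on the samples its peer considers clean,
# which keeps the two networks from reinforcing their own mistakes.
train_idx_for_a = select_small_loss(losses_b, keep_ratio=0.5)
train_idx_for_b = select_small_loss(losses_a, keep_ratio=0.5)
print(sorted(train_idx_for_a))  # [0, 1, 5]
print(sorted(train_idx_for_b))  # [0, 4, 5]
```

The paper's contribution adds a compression (low-rank) regularization on top of this co-teaching scheme; the sketch above covers only the shared sample-selection mechanism.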

Read more

Image data augmentation scheduler for albumentations transforms

albu_scheduler Scheduler for albumentations transforms, based on the PyTorch schedulers interface.

TransformMultiStepScheduler

import albumentations as A
from albu_scheduler import TransformMultiStepScheduler

transform_1 = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])
transform_2 = A.Compose([
    A.RandomCrop(width=128, height=128),
    A.VerticalFlip(p=0.5),
])

scheduled_transform = TransformMultiStepScheduler(transforms=[transform_1, transform_2],
                                                  milestones=[0, 10])
dataset = Dataset(transform=scheduled_transform)
for epoch in range(100):
    train(...)
    validate(...)
    scheduled_transform.step()

TransformSchedulerOnPlateau

from albu_scheduler import TransformSchedulerOnPlateau

scheduled_transform = TransformSchedulerOnPlateau(transforms=[transform_1, transform_2],
                                                  mode="max", patience=5)
dataset = Dataset(transform=scheduled_transform)
for epoch in range(100):
    train(...)
    score = validate(...)
    scheduled_transform.step(score)

git clone https://github.com/KiriLev/albu_scheduler
cd albu_scheduler
make install […]

Read more

Performing Sentiment Analysis Using Twitter Data!

Photo by Daddy Mohlala on Unsplash. “Data is water; purifying it to make it drinkable is the role of a Data Analyst” – Kashish Rastogi. In this blog we are going to clean Twitter text data and visualize it. Table of Contents: Problem Statement; Data Description; Cleaning text with NLP; Finding what the text contains, with spaCy; Cleaning text with the preprocessor library; Sentiment analysis of the data; Data visualization. I am taking the Twitter data which is available here on […]
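As a taste of the kind of cleaning involved, here is a minimal regex-based sketch. The post itself uses spaCy and the preprocessor library; the function below is an illustrative stand-in, not code from the post:

```python
import re

def clean_tweet(text):
    """Strip common Twitter artifacts: URLs, @mentions, the '#' symbol,
    and extra whitespace."""
    text = re.sub(r"http\S+", "", text)       # remove URLs
    text = re.sub(r"@\w+", "", text)          # remove @mentions
    text = re.sub(r"#", "", text)             # keep hashtag words, drop the '#'
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return text

print(clean_tweet("Loving this! @user check https://t.co/abc #NLP"))
# Loving this! check NLP
```

Libraries like tweet-preprocessor wrap this kind of logic in a single call, which is why the post reaches for one rather than hand-rolling regexes.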

Read more

Training BERT Text Classifier on Tensor Processing Unit (TPU)

Training Hugging Face’s most famous model on a TPU for Tunisian Arabizi social media sentiment analysis.   Introduction Arabic speakers usually express themselves in a local dialect on social media, so Tunisians use Tunisian Arabizi, which consists of Arabic written in the Latin alphabet. Sentiment analysis relies on cultural knowledge and word sense along with contextual information. In this project we will use both the Arabizi dialect and sentiment analysis to solve the problem. The competition is hosted on Zindi, which […]

Read more

Dialogue in the Wild: Learning from a Deployed Role-Playing Game with Humans and Bots

Abstract Much of NLP research has focused on crowdsourced static datasets and the supervised learning paradigm of training once and then evaluating test performance. As argued in de Vries et al. (2020), crowdsourced data has the issues of lack of naturalness and relevance to real-world use cases, while the static dataset paradigm does not allow for a model to learn from its experiences of using language (Silver et al., 2013). In contrast, one might hope for machine learning systems that […]

Read more