A personal assistant chatbot capable of performing many tasks, similar to Google Assistant and more

PersonalAssistant is a personal assistant capable of performing many tasks, with some unique features you haven’t seen yet. Features / tasks it can perform: games (e.g., Rock Paper Scissors with a GUI); searching Wikipedia, Google Maps, etc.; playing videos from YouTube; sending email; sending WhatsApp messages; COVID tracking; weather; jokes; news; high security (face unlock); photo capture; math calculations; timer; built-in search image display; smart dictionary search; OS and battery info; window and tab operations; opening websites; file […]

Read more

LoRA: Low-Rank Adaptation of Large Language Models

LoRA This repo contains the implementation of LoRA in GPT-2 and steps to replicate the results in our recent paper, LoRA: Low-Rank Adaptation of Large Language Models, by Edward J. Hu*, Yelong Shen*, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Weizhu Chen. Paper: https://arxiv.org/abs/2106.09685. LoRA reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights. This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment […]
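The low-rank update is simple enough to sketch in a few lines. Below is a minimal pure-Python illustration (not the repo's implementation): the frozen pretrained weight W is augmented by a trainable product B·A of rank r, scaled by alpha/r, as in the paper.

```python
# Minimal LoRA forward-pass sketch (illustrative, not the repo's code).
# y = W x + (alpha / r) * B @ A @ x, with W frozen and only A, B trained.

def matvec(m, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha):
    r = len(A)                        # rank of the decomposition
    base = matvec(W, x)               # frozen pretrained path
    update = matvec(B, matvec(A, x))  # low-rank trainable path
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]

# Toy sizes: d_out = 2, d_in = 3, rank r = 1.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
A = [[1.0, 1.0, 1.0]]   # r x d_in
B = [[0.5], [0.5]]      # d_out x r
y = lora_forward(W, A, B, [1.0, 2.0, 3.0], alpha=1.0)
print(y)  # → [4.0, 5.0]
```

Because only A and B are stored per task, switching tasks at deployment means swapping two small matrices rather than a full copy of W.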

Read more

Utility to extract Fantasy Grounds Unity line-of-sight and lighting files from a Universal VTT file exported from Dungeondraft

uvtt2fgu Utility to extract Fantasy Grounds Unity line-of-sight and lighting files from a Universal VTT file exported from Dungeondraft. This program works with Fantasy Grounds Unity v4.1 or higher, as that is the version where dynamic lighting effects were added. It was last used with Dungeondraft v1.0.1.3. Requirements: uvtt2fgu.py requires a Python 3 installation with pip. Usage: create your map in Dungeondraft, then export the map in Universal VTT format. You do not have to use the default “Best Quality” Grid Preset. […]
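A Universal VTT export is a JSON file, so the extraction step can be sketched with the standard library alone. The key names below ("line_of_sight", "lights") are assumptions based on typical .dd2vtt exports; uvtt2fgu.py's actual parsing may differ.

```python
import json

# Sketch: read a Universal VTT export and count its occluders and lights.
# Key names are assumptions based on common .dd2vtt files, not uvtt2fgu.py's code.

def summarize_uvtt(text):
    data = json.loads(text)
    walls = data.get("line_of_sight", [])   # lists of wall points
    lights = data.get("lights", [])         # light-source definitions
    return len(walls), len(lights)

sample = '{"line_of_sight": [[{"x": 0, "y": 0}, {"x": 1, "y": 0}]], "lights": []}'
print(summarize_uvtt(sample))  # → (1, 0)
```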

Read more

Pipeline for chemical image-to-text competition

BMS-Molecular-Translation Pipeline for a chemical image-to-text competition. This is a pipeline for Bristol-Myers Squibb – Molecular Translation by Vadim Timakin and Maksim Zhdanov. We got bronze medals in this competition. A significant part of the code originated from Y. Nakama’s notebook. This competition was about image-to-text translation of images of molecular skeletal structures into InChI chemical formula identifiers, e.g. InChI=1S/C16H13Cl2NO3/c1-10-2-4-11(5-3-10)16(21)22-9-15(20)19-14-8-12(17)6-7-13(14)18/h2-8H,9H2,1H3,(H,19,20). Solution: General encoder-decoder concept. Most participants used a CNN encoder to acquire features and a decoder (LSTM/GRU/Transformer) to produce the text sequence. That’s a common approach to […]
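The encoder-decoder concept above boils down to a decoding loop: the encoder's image features stay fixed while the decoder emits one token at a time until an end marker. A pure-Python toy of that loop (not the authors' code, which uses a real CNN and LSTM/Transformer):

```python
# Toy greedy decoding loop for image-to-text (illustrative only; real
# pipelines use a CNN encoder plus an LSTM/GRU/Transformer decoder).

def greedy_decode(step_fn, features, max_len=10, eos="<eos>"):
    """step_fn(features, tokens) -> next token; decoding stops at eos."""
    tokens = ["<sos>"]
    for _ in range(max_len):
        nxt = step_fn(features, tokens)
        if nxt == eos:
            break
        tokens.append(nxt)
    return tokens[1:]

# Stub "decoder" that emits a fixed InChI prefix token by token.
vocab = ["InChI=1S/", "C16H13Cl2NO3", "<eos>"]
def stub_step(features, tokens):
    return vocab[len(tokens) - 1]

print(greedy_decode(stub_step, features=None))  # → ['InChI=1S/', 'C16H13Cl2NO3']
```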

Read more

An Efficient Pipeline For Bloom’s Taxonomy Using Natural Language Processing

Pipeline-For-NLP-With-Blooms-Taxonomy Pipeline For NLP with Bloom’s Taxonomy Using Improved Question Classification and Question Generation using Deep Learning This repository contains all the source code that is needed for the Project : An Efficient Pipeline For Bloom’s Taxonomy with Question Generation Using Natural Language Processing and Deep Learning. Outline : An examination assessment undertaken by educational institutions is an essential process, since it is one of the fundamental steps to determine a student’s progress and achievements for a distinct subject or […]

Read more

Diverse im2im and vid2vid selfie to anime translation

GANs N’ Roses PyTorch Official PyTorch repo for GANs N’ Roses: diverse im2im and vid2vid selfie-to-anime translation. Abstract: We show how to learn a map that takes a content code, derived from a face image, and a randomly chosen style code to an anime image. We derive an adversarial loss from our simple and effective definitions of style and content. This adversarial loss guarantees the map is diverse: a very wide range of anime can be produced […]

Read more

Implementation of Uformer, Attention-based Unet, in Pytorch

Uformer – Pytorch Implementation of Uformer, Attention-based Unet, in Pytorch. It will only offer the concat-cross-skip connection. This repository will be geared towards use in a project for learning protein structures. Specifically, it will include the ability to condition on time steps (needed for DDPM), as well as 2d relative positional encoding using rotary embeddings (instead of the bias on the attention matrix in the paper). Install $ pip install uformer-pytorch Usage import torch from uformer_pytorch import Uformer model = […]
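The time-step conditioning mentioned above is the standard DDPM trick: map the integer step t to a sinusoidal embedding that the network consumes alongside the image features. A generic sketch of that embedding (not uformer-pytorch's implementation):

```python
import math

# Sinusoidal time-step embedding, the usual DDPM conditioning signal.
# Generic sketch; uformer-pytorch's actual embedding may differ.

def timestep_embedding(t, dim):
    half = dim // 2
    # Geometrically spaced frequencies from 1 down to 1/10000.
    freqs = [math.exp(-math.log(10000.0) * i / half) for i in range(half)]
    return [math.sin(t * f) for f in freqs] + [math.cos(t * f) for f in freqs]

emb = timestep_embedding(t=5, dim=8)
print(len(emb))  # → 8
```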

Read more

CT-Net: Channel Tensorization Network for Video Classification

CT-Net CT-Net: Channel Tensorization Network for Video Classification. @inproceedings{li2021ctnet, title={{CT}-Net: Channel Tensorization Network for Video Classification}, author={Kunchang Li and Xianhang Li and Yali Wang and Jun Wang and Yu Qiao}, booktitle={International Conference on Learning Representations}, year={2021}, url={https://openreview.net/forum?id=UoaQUQREMOs}} Overview: [2021/6/3] We released the PyTorch code of CT-Net. More details and models will be available. Model Zoo: More models will be released in a month… For now we release the model for visualization; please download it from here and put it […]

Read more

A data augmentations library for audio, image, text, and video

AugLy AugLy is a data augmentations library that currently supports four modalities (audio, image, text & video) and over 100 augmentations. Each modality’s augmentations are contained within its own sub-library. These sub-libraries include both function-based and class-based transforms, composition operators, and have the option to provide metadata about the transform applied, including its intensity. AugLy is a great library to utilize for augmenting your data in model training, or to evaluate the robustness gaps of your model! We designed AugLy […]
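The design AugLy describes (class-based transforms, a composition operator, and optional metadata recording each applied transform with its intensity) can be illustrated with a small toy. This is a conceptual sketch on strings, not AugLy's actual API:

```python
# Toy illustration of class-based transforms, composition, and per-transform
# metadata, mirroring the design AugLy describes. Not AugLy's real API.

class UpperCase:
    intensity = 1.0
    def __call__(self, text, metadata=None):
        if metadata is not None:
            metadata.append({"name": "UpperCase", "intensity": self.intensity})
        return text.upper()

class Repeat:
    intensity = 0.5
    def __call__(self, text, metadata=None):
        if metadata is not None:
            metadata.append({"name": "Repeat", "intensity": self.intensity})
        return text + " " + text

class Compose:
    def __init__(self, transforms):
        self.transforms = transforms
    def __call__(self, text, metadata=None):
        for t in self.transforms:
            text = t(text, metadata)
        return text

meta = []
aug = Compose([UpperCase(), Repeat()])
print(aug("hello", metadata=meta))   # → HELLO HELLO
print([m["name"] for m in meta])     # → ['UpperCase', 'Repeat']
```

The metadata list makes augmented training data auditable: you can see exactly which transforms, at which intensities, produced each example.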

Read more

Official implementation for TransDA

Official implementation for TransDA Official PyTorch implementation of “Transformer-Based Source-Free Domain Adaptation”. Overview: Result: Prerequisites: python == 3.6.8, pytorch == 1.1.0, torchvision == 0.3.0, numpy, scipy, sklearn, PIL, argparse, tqdm. Prepare pretrained model: we choose R50-ViT-B_16 as our encoder. wget https://storage.googleapis.com/vit_models/imagenet21k/R50+ViT-B_16.npz mkdir ./model/vit_checkpoint/imagenet21k mv R50+ViT-B_16.npz ./model/vit_checkpoint/imagenet21k/R50+ViT-B_16.npz Our checkpoints can be found on Dropbox. Dataset: please manually download the datasets Office, Office-Home, VisDA, and Office-Caltech from the official websites, and modify the path of images in each ‘.txt’ under the folder ‘./data/’. The […]

Read more