Break down your CNN and visualize the features from within the model

Rover — Reverse engineer your CNNs, in style. Rover will help you break down your CNN and visualize the features from within the model. No need to write weirdly abstract code to visualize your model’s features anymore. Usage: git clone https://github.com/Mayukhdeb/rover.git; cd rover; install the requirements with pip install -r requirements.txt; then, in your script: from rover import core; from rover.default_models import models_dict; core.run(models_dict=models_dict); and run the script with Streamlit: $ streamlit run your_script.py. If everything goes right, you’ll see something like: […]
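The pieces quoted above assemble into a single script; the imports and the core.run call are taken directly from the excerpt, and the filename is just the placeholder used there:

```python
# your_script.py — the usage from the excerpt, gathered into one file.
# models_dict is rover's bundled set of default models, as described above.
from rover import core
from rover.default_models import models_dict

core.run(models_dict=models_dict)
```

Launch it with `streamlit run your_script.py` and the rover UI should open in the browser.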

Read more

A PyTorch library and evaluation platform for end-to-end compression research

CompressAI CompressAI (compress-ay) is a PyTorch library and evaluation platform for end-to-end compression research. CompressAI currently provides: custom operations, layers, and models for deep learning-based data compression; a partial port of the official TensorFlow compression library; pre-trained end-to-end compression models for learned image compression; and evaluation scripts to compare learned models against classical image/video compression codecs. Note: Multi-GPU support is now experimental. Installation: CompressAI supports Python 3.6+ and PyTorch 1.4+. pip: pip install compressai. Note: wheels are available for Linux and […]
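A minimal sketch of running one of the pre-trained models mentioned above; the zoo function name, output keys, and rate estimate reflect my reading of the library's documentation and should be verified against the CompressAI docs for your installed version:

```python
# Load a pre-trained learned image compression model and estimate its rate.
# bmshj2018_factorized and the output dictionary keys are assumptions based
# on the CompressAI model zoo; check the docs before relying on them.
import math
import torch
from compressai.zoo import bmshj2018_factorized

net = bmshj2018_factorized(quality=2, pretrained=True).eval()

x = torch.rand(1, 3, 256, 256)           # stand-in for a normalised RGB image
with torch.no_grad():
    out = net(x)                          # {"x_hat": ..., "likelihoods": {...}}

num_pixels = x.size(0) * x.size(2) * x.size(3)
bpp = sum(
    torch.log(lik).sum() / (-math.log(2) * num_pixels)
    for lik in out["likelihoods"].values()
)
print(f"estimated rate: {bpp.item():.3f} bpp")
print("reconstruction shape:", out["x_hat"].shape)
```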

Read more

Will MLOps change the future of the healthcare system?

This article was published as a part of the Data Science Blogathon. Overview: Since the advent of modern science and technology, researchers have been trying to find better solutions to the real-world problems we face in our day-to-day lives. Technologies like machine learning, AI, deep learning, and NLP offer powerful solutions in growing sectors such as finance, healthcare, and retail, making them more and more reliable in production. Will Machine Learning change the […]

Read more

Context Managers and Python’s with Statement

The with statement in Python is quite a useful tool for properly managing external resources in your programs. It allows you to take advantage of existing context managers to automatically handle the setup and teardown phases whenever you’re dealing with external resources or with operations that require those phases. In addition, the context management protocol allows you to create your own context managers, so you can customize the way you deal with system resources. So, what’s the with statement good for? […]
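Both halves of the protocol the article refers to fit in a few lines: a class that implements __enter__/__exit__, and the equivalent generator built with contextlib.contextmanager (a generic illustration, not taken from the article):

```python
# A small, self-contained example of the context management protocol:
# a class-based context manager and a generator-based one.
from contextlib import contextmanager
import time

class Timer:
    """Measure the wall-clock time spent inside a `with` block."""

    def __enter__(self):
        self.start = time.perf_counter()
        return self                      # bound to the `as` target

    def __exit__(self, exc_type, exc_value, traceback):
        self.elapsed = time.perf_counter() - self.start
        return False                     # do not suppress exceptions

@contextmanager
def timer():
    start = time.perf_counter()
    try:
        yield                            # the body of the `with` block runs here
    finally:
        print(f"elapsed: {time.perf_counter() - start:.4f}s")

with Timer() as t:
    sum(range(1_000_000))
print(f"elapsed: {t.elapsed:.4f}s")

with timer():
    sum(range(1_000_000))
```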

Read more

Beginner’s guide before building a Chatbot

This article was published as a part of the Data Science Blogathon. According to Accenture – “57% of the Businesses agree that chatbots deliver larger ROI with minimal effort.” Table of Contents: 1. What’s a chatbot? 2. A dive into types of chatbots 3. What are the top platforms to build a chatbot? 4. What are the top frameworks for building a chatbot? 5. The algorithm to build a chatbot 6. Tips to follow before building your first chatbot 7. Top […]

Read more

Issue #133 – Evaluating Gender Bias in MT

02 Jun 21 Issue #133 – Evaluating Gender Bias in MT, in Evaluation, The Neural MT Weekly. Author: Akshai Ramesh, Machine Translation Scientist @ Iconic. Introduction: We often tend to personify aspects of life in ways that vary based upon the beholder’s interpretation. There are plenty of examples of this – “Mother Earth”, Doctor (man), Cricketer (man), Nurse (woman), Cook (woman), etc. MT systems are trained on large amounts of parallel corpora, which encode this social bias. If that is the case, […]

Read more

A fine-grained manually annotated named entity recognition dataset

Few-NERD Few-NERD is a large-scale, fine-grained, manually annotated named entity recognition dataset, which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities, and 4,601,223 tokens. Three benchmark tasks are built on it: one supervised, Few-NERD (SUP), and two few-shot, Few-NERD (INTRA) and Few-NERD (INTER). Few-NERD is manually annotated based on context; for example, in the sentence “London is the fifth album by the British rock band…”, the named entity London […]
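A sketch of reading the dataset's annotation files. It assumes the common CoNLL-style layout (one "token<TAB>label" pair per line, blank lines between sentences, "O" for non-entity tokens, and hyphenated labels such as "art-music" combining the coarse and fine types); check the Few-NERD repository for the exact file format:

```python
# Parse a Few-NERD-style token-per-line file into (tokens, labels) sentences.
# The tab-separated format and label scheme are assumptions; verify them
# against the files shipped with the dataset.
def read_few_nerd(path):
    sentences, tokens, labels = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                       # blank line ends a sentence
                if tokens:
                    sentences.append((tokens, labels))
                    tokens, labels = [], []
                continue
            token, label = line.split("\t")
            tokens.append(token)
            labels.append(label)
    if tokens:
        sentences.append((tokens, labels))
    return sentences

# The coarse type is the part before the hyphen, e.g. "art-music" -> "art";
# non-entity tokens stay "O".
```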

Read more

A 3D dense mapping backend library for SLAM based on Taichi-Lang, designed for aerial swarms

TaichiSLAM This project is a 3D dense mapping backend library for SLAM based on Taichi-Lang, designed for the aerial swarm. Taichi is an efficient domain-specific language (DSL) designed for computer graphics (CG), which can be adopted for high-performance computing on mobile devices. Thanks to the connection between CG and robotics, we can adopt this powerful tool to accelerate the development of robotics algorithms. In this project, I am trying to take advantage of Taichi, including parallel optimization, sparse computing, advanced data […]
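To give a feel for the Taichi kernels such a mapping backend builds on, here is a minimal, self-contained occupancy-grid update. It is not TaichiSLAM's API; the field names, grid size, and voxel size are invented for illustration only:

```python
# Illustrative Taichi kernel: integrate a batch of 3D points into a dense
# voxel occupancy grid. NOT TaichiSLAM's actual code or API.
import numpy as np
import taichi as ti

ti.init(arch=ti.cpu)      # switch to ti.gpu where a GPU backend is available

GRID = 64                 # voxels per axis (hypothetical)
VOXEL = 0.1               # voxel edge length in metres (hypothetical)
N = 1024                  # points per batch

occupancy = ti.field(dtype=ti.i32, shape=(GRID, GRID, GRID))
points = ti.Vector.field(3, dtype=ti.f32, shape=N)

@ti.kernel
def integrate_points():
    # The outermost loop is parallelised by Taichi; += on a shared voxel
    # becomes an atomic add automatically.
    for p in range(N):
        idx = ti.cast(points[p] / VOXEL, ti.i32)
        if (0 <= idx[0] and idx[0] < GRID and
                0 <= idx[1] and idx[1] < GRID and
                0 <= idx[2] and idx[2] < GRID):
            occupancy[idx[0], idx[1], idx[2]] += 1

points.from_numpy((np.random.rand(N, 3) * GRID * VOXEL).astype(np.float32))
integrate_points()
print("occupied hits:", occupancy.to_numpy().sum())
```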

Read more

Python library for Junos automation

py-junos-eznc The repo is under active development; if you take a clone, you are getting the latest, and perhaps not entirely stable, code. Junos PyEZ is a Python library to remotely manage/automate Junos devices. The user is NOT required (a) to be a “Software Programmer™”, (b) to have sophisticated knowledge of Junos, or (c) to have a complex understanding of the Junos XML API. For “Non-Programmers” – Python as a Power Shell. This means that “non-programmers”, for example the Network Engineer, can […]
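A minimal sketch of connecting to a Junos device with PyEZ; the host, credentials, and fact keys shown are placeholders, and the exact arguments should be checked against the Junos PyEZ documentation:

```python
# Connect to a Junos device over NETCONF and print a couple of device facts.
# Host and credentials are placeholders for illustration only.
from jnpr.junos import Device

dev = Device(host="192.0.2.1", user="netops", passwd="secret")
dev.open()                                # establishes the NETCONF-over-SSH session

# `facts` is a dictionary of device information gathered on connect.
print(dev.facts["hostname"])
print(dev.facts["version"])

dev.close()
```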

Read more

A C-like hardware description language adding HLS-like automatic pipelining

PipelineC A C-like hardware description language (HDL) adding HLS (high-level synthesis)-like automatic pipelining as a language construct/compiler feature. Not actually regular C, but mostly compilable by gcc for basic functional verification/‘simulation’. This is for convenience as a familiar bare-minimum language prototype, not as an ideal end goal. Reach out to help develop something more complex together! Can reasonably replace Verilog/VHDL. The compiler produces synthesizable and human-readable, debuggable VHDL. Hooks exist for inserting raw VHDL / existing IP / black boxes. […]

Read more