Can Wikipedia Help Offline RL?

By Machel Reid, Yutaro Yamada, and Shixiang Shane Gu. Our paper is up on arXiv. Overview: this is the official codebase for "Can Wikipedia Help Offline Reinforcement Learning?" and contains scripts to reproduce our experiments. (The codebase is based on that of https://github.com/kzl/decision-transformer.) Instructions: code for our experiments lives in the code directory. Installation: experiments require MuJoCo; follow the instructions in the mujoco-py repo to install it. Dependencies can then be installed with the following command: conda env create -f conda_env.yml. Downloading datasets: datasets are stored in the […]
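The offline RL datasets the codebase trains on are typically preprocessed into returns-to-go, the per-timestep future-reward targets that Decision Transformer-style models condition on. A minimal sketch of that computation (illustrative only; the repo's actual preprocessing lives in its data-loading scripts, and `gamma` is an assumed parameter):

```python
def returns_to_go(rewards, gamma=1.0):
    """Cumulative future reward from each timestep (undiscounted by default)."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    # Walk the trajectory backwards, accumulating reward from the end.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

print(returns_to_go([1.0, 0.0, 2.0]))  # [3.0, 2.0, 2.0]
```

Each position then holds the total reward still achievable from that step onward, which the model receives as a conditioning token.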

Read more

Wikipedia Extractive Text Summarizer + Keywords Identification (entropy-based)

Uses Beautiful Soup to read Wikipedia pages, Gensim to summarize, and NLTK to process text, and extracts keywords based on entropy: everything in one beautiful piece of code. I was looking for similar code throughout GitHub, but most of it was very difficult to understand and use. I'm building this repo to provide a simple yet effective solution for extractive summarization and keyword identification. The program works best for summaries of 300+ words. License: please follow the license guidelines in usage. GNU General Public License v3.0. Requirements: I […]
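One plausible reading of "entropy-based" keyword identification is scoring a word by how its occurrences spread across sentences: a word concentrated in a few sentences has low entropy and reads as topical, while a word spread evenly has high entropy. A minimal sketch under that assumption (this is an illustrative interpretation, not necessarily the repo's exact formula):

```python
import math

def word_entropy(sentences, word):
    """Shannon entropy of a word's distribution over sentences.

    Lower entropy = occurrences concentrated in fewer sentences.
    """
    counts = [s.lower().split().count(word.lower()) for s in sentences]
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

sentences = [
    "entropy measures surprise",
    "keywords have low entropy",
    "common words appear everywhere",
]
print(word_entropy(sentences, "entropy"))  # 1.0 (evenly split over 2 sentences)
```

Ranking the vocabulary by this score (in either direction, depending on the convention chosen) gives a simple keyword list without any training.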

Read more

A statistics-duelling deck generator using data from Wikipedia

A statistics-duelling deck generator using data from Wikipedia. Trop Tumps chooses random categories from dbpedia.org and turns them into (mostly useless) printable decks of cards representing things from that category, complete with exciting statistics. Installation. Note: Trop Tumps requires Python 3.6+. The simplest way to install Trop Tumps is using pip. With Python and pip installed, Trop Tumps can be installed from the Python Package Index with: pip install troptumps, or directly from the source repository with: pip install git+https://github.com/Frimkron/troptumps#egg=troptumps. Alternatively […]
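The "statistics duel" gameplay the decks support works like Top Trumps: each card carries numeric stats, and a round compares one chosen stat across two cards. A tiny sketch of that comparison (card names and stat keys here are made up for illustration; the real decks are built from dbpedia.org data):

```python
def duel(card_a, card_b, stat):
    """Return the card that wins on `stat`, or None on a tie."""
    a = card_a["stats"][stat]
    b = card_b["stats"][stat]
    if a == b:
        return None
    return card_a if a > b else card_b

# Hypothetical cards from an imagined "Mountains" category deck.
etna = {"name": "Mount Etna", "stats": {"elevation_m": 3357}}
nevis = {"name": "Ben Nevis", "stats": {"elevation_m": 1345}}
print(duel(etna, nevis, "elevation_m")["name"])  # Mount Etna
```

Since every category from dbpedia.org yields different properties, the generator's job is essentially to pick a category, harvest comparable numeric properties, and lay them out on printable cards.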

Read more