An Alfred workflow to restore iTerm2 window Arrangements

This Alfred workflow lists the available iTerm2 arrangements and lets you select one to restore. Requirements: iTerm2 Build 3.4.15 (not tested on other versions, but it might work) and the following Python libraries: pip3 install iterm2, pip3 install pyobjc. The workflow uses the Python interpreter shipped with iTerm2 at ~/Library/Application Support/iTerm2/Scripts/get_window_arrangements/iterm2env/versions/3.8.6/bin/python3. Usage: just type iTermA in Alfred and it will launch iTerm2 if it is closed, list the available window arrangements, and restore the selected arrangement. Download: check the releases page to download it. […]
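For context, a minimal sketch of how such a script could list and restore arrangements with the iterm2 Python package; this is not the workflow's actual code, and it assumes the package's Arrangement.async_list and Arrangement.async_restore helpers.

# Minimal sketch (not the workflow's own script): list saved window
# arrangements and restore one by name via the iterm2 Python API.
import sys
import iterm2

async def main(connection):
    # Names of all saved window arrangements known to iTerm2.
    names = await iterm2.Arrangement.async_list(connection)
    if not names:
        print("No saved arrangements found")
        return
    # Restore the arrangement passed on the command line, or the first one.
    target = sys.argv[1] if len(sys.argv) > 1 else names[0]
    await iterm2.Arrangement.async_restore(connection, target)

iterm2.run_until_complete(main)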

Read more

Pytorch ViT for Image classification on the CIFAR10 dataset

Introduction: this project uses ViT to perform image classification on the CIFAR-10 dataset. The ViT implementation and pretrained weights are from https://github.com/asyml/vision-transformer-pytorch. Installation: create the environment with conda create --name vit --file requirements.txt, then conda activate vit. Datasets: download CIFAR-10 from http://www.cs.toronto.edu/~kriz/cifar.html, create a data folder, and unzip cifar-10-python.tar.gz into 'data/'. Then run python main.py.
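As a rough illustration of this setup, here is a minimal sketch of fine-tuning a ViT on CIFAR-10; the repo's own model comes from asyml/vision-transformer-pytorch, so torchvision's vit_b_16 is used here only as a stand-in, and the hyperparameters are illustrative.

# Minimal sketch: CIFAR-10 loading and a single fine-tuning pass with a ViT.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(224),  # ViT-B/16 expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
train_set = datasets.CIFAR10(root="data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 10)  # 10 CIFAR-10 classes
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # one epoch shown for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()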

Read more

Some help tools for AgoraToken

Supports AgoraToken versions 001 – 006, but for security reasons I recommend using version 006 and above. 1 Analyzer The analyzer can help you parse the original content of an AgoraToken, which you can use to check whether it is correct. # Example (appID: 970CA35de60c44645bbae8a215061b33, appCert: 5CFd2fd1755d40ecb72977518be15d3b) python3 analyzer.py 006970CA35de60c44645bbae8a215061b33IACV0fZUBw+72cVoL9eyGGh3Q6Poi8bgjwVLnyKSJyOXR7dIfRBXoFHlEAABAAAAR/QQAAEAAQCvKDdW # Output ## version: 006 ## [Analyze] AccessToken(V6), Signature: 95d1f654070fbbd9c5682fd7b218687743a3e88bc6e08f054b9f229227239747, AppId: 970CA35de60c44645bbae8a215061b33, CRC(ChannelName): 276646071, CRC(Uid): 3847331927, Ts: 1111111, Salt: 1, privilege: 1:1446455471 2 Checker If you want to use a checker, […]
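To show the layout such an analyzer works with, here is a minimal sketch based on Agora's published AccessToken format rather than this repo's analyzer.py: a 3-character version prefix, a 32-character app ID, then Base64-encoded signed content. The helper name is illustrative.

# Minimal sketch: split an AgoraToken into its version, app ID, and decoded content.
import base64

def split_token(token: str):
    version = token[:3]                      # e.g. "006"
    app_id = token[3:35]                     # 32 hex characters
    content = base64.b64decode(token[35:])   # packed signature, CRCs, and message
    return version, app_id, content

version, app_id, content = split_token(
    "006970CA35de60c44645bbae8a215061b33IACV0fZUBw+72cVoL9eyGGh3Q6Poi8bgjwVLnyKSJyOXR7dIfRBXoFHlEAABAAAAR/QQAAEAAQCvKDdW"
)
print(version, app_id, len(content))  # 006 970CA35de60c44645bbae8a215061b33 60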

Read more

Deep Learning on SDF for Classifying Brain Biomarkers

To reproduce the results from the paper, do the following steps. Install PyTorch and sparse convolution; the website for sparse convolution is here. Download the processed dataset from here and unzip it. You should then have a folder structure like the following: . +-- data +-- src +-- readme.md +-- data.zip. Then cd src and run ./sh/ad/train00.sh. Try the different scripts in the sh folder to reproduce the results in the paper.

Read more

A manager for the under-utilized mksession command in vim

ℹ️ Reasoning If you use vim or neovim on a daily basis and work in large codebases, it is probably not uncommon for you to have 10+ tabs open at a time, with various splits. Once you close this vim session, the layout is lost to the ethers. The mksession command in vim (and neovim) can save you by writing the session to a directory, promising to return you to your work exactly how you left it. However, the problem is that most of us accrue […]

Read more

Point-Set Registrations for Ultrasound Probe Calibrations

Undergraduate Thesis – Updated January 25, 2022. Author: Matteo Tanzi. Year: 2022. Purpose: Undergraduate Thesis. Currently, algorithms for point-based, line-based, and dynamic registrations are being developed to be ported into 3D Slicer as modules. GitHub Repository Structure: this GitHub repository contains Jupyter Notebooks and Python programs. The Python programs are found within the programs folder – descriptions can be found below. The Jupyter Notebooks – found within Documentation – outline the research and development techniques used for the point-set registration programs, […]
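For readers unfamiliar with point-based registration, a minimal generic sketch of the rigid (Kabsch/SVD) formulation follows; this is an illustration of the technique, not code from the thesis repository.

# Minimal sketch: rigid point-based registration via the Kabsch/SVD method.
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Find rotation R and translation t so that R @ source_i + t ~= target_i.
    Both arrays are (N, 3) with corresponding points in the same order."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_centroid).T @ (target - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Quick check with a known transform.
rng = np.random.default_rng(0)
pts = rng.random((10, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
R_est, t_est = rigid_register(pts, pts @ R_true.T + np.array([1.0, 2.0, 3.0]))
print(np.allclose(R_est, R_true), np.round(t_est, 3))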

Read more

Analyze Big Sequence Alignments with PySpark in AWS EMR

This repo hosts my code for the article “Analyze Big Sequence Alignments with PySpark in AWS EMR”. Prerequisites: Spark, the AWS CLI, and an AWS account. Follow the instructions in the article. Once you have uploaded the files into your S3 bucket, run aws emr create-cluster --name "Spark_step_pip" --release-label emr-6.5.0 --applications Name=Spark --log-uri s3://[your_S3_bucket]/logs/ --instance-type m5.xlarge --instance-count 3 --bootstrap-actions Path=s3://[your_S3_bucket]/emr_bootstrap.sh --use-default-roles --auto-terminate --steps "Type=Spark,Name=SparkProgram,ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--master,yarn,--py-files,s3://[your_S3_bucket]/helper_function.py,s3://[your_S3_bucket]/spark_3mer.py,s3://[your_S3_bucket]/test.sam,[your_S3_bucket],sankey.json]"
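To give a feel for the kind of Spark step this submits, here is a minimal sketch of counting 3-mers from the reads in a SAM file with PySpark; the repo's spark_3mer.py is not reproduced here, so the column index and output handling are illustrative assumptions.

# Minimal sketch: count 3-mers across the read sequences of a SAM file.
import sys
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sam-3mer-count").getOrCreate()
sc = spark.sparkContext

sam_path = sys.argv[1] if len(sys.argv) > 1 else "test.sam"

counts = (
    sc.textFile(sam_path)
      .filter(lambda line: not line.startswith("@"))              # skip SAM header lines
      .map(lambda line: line.split("\t")[9])                      # 10th column holds the read sequence
      .flatMap(lambda seq: [seq[i:i + 3] for i in range(len(seq) - 2)])
      .map(lambda kmer: (kmer, 1))
      .reduceByKey(lambda a, b: a + b)
)

# Print the ten most frequent 3-mers.
for kmer, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):
    print(kmer, n)

spark.stop()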

Read more