
计算机视觉研究院专栏 (Computer Vision Research Institute column)

Author: Edison_G

This is a new module from 计算机视觉研究院 (Computer Vision Research Institute). Going forward, we will regularly share the latest papers along with code implementations of their techniques.

《Towards Layer-wise Image Vectorization》(CVPR 2022)

GitHub: github.com/ma-xu/LIVE

Installation

We suggest using conda to create a new Python environment.

Requirements: 5.0 < GCC < 6.0; nvcc > 10.0.

git clone git@github.com:ma-xu/LIVE.git
cd LIVE
conda create -n live python=3.7
conda activate live
conda install -y pytorch torchvision -c pytorch
conda install -y numpy scikit-image
conda install -y -c anaconda cmake
conda install -y -c conda-forge ffmpeg
pip install svgwrite svgpathtools cssutils numba torch-tools scikit-fmm easydict visdom
pip install opencv-python==4.5.4.60  # please install this version to avoid segmentation fault
cd DiffVG
git submodule update --init --recursive
python setup.py install
cd ..
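After installation, a quick import check helps confirm that the DiffVG extension built correctly. The snippet below is a minimal sanity-check sketch, not part of the official LIVE instructions; it assumes the DiffVG bindings are exposed as the pydiffvg module (the name used by the DiffVG project).

import torch
import pydiffvg  # assumed module name for the DiffVG bindings built above

# Verify that PyTorch sees the GPU and that the DiffVG extension imports cleanly.
print("CUDA available:", torch.cuda.is_available())
print("pydiffvg imported from:", pydiffvg.__file__)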

Run Experiments

conda activate live
cd LIVE
# Please modify the parameters accordingly.
python main.py --config <config.yaml> --experiment <experiment-setting> --signature <given-folder-name> --target <input-image> --log_dir <log-dir>
# Here is a simple example:
python main.py --config config/base.yaml --experiment experiment_5x1 --signature *** ile --target figures/ *** ile.png --log_dir log/

《Multimodal Token Fusion for Vision Transformers》(CVPR 2022)

GitHub: github.com/yikaiw/TokenFusion

《PointAugmenting: Cross-Modal Augmentation for 3D Object Detection》(CVPR 2022)

GitHub: github.com/VISION-SJTU/PointAugmenting

《Fantastic questions and where to find them: FairytaleQA -- An authentic dataset for narrative comprehension.》(ACL 2022)

GitHub: github.com/uci-soe/FairytaleQAData

《LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks》(AAAI 2022)

GitHub: github.com/agoodge/LUNAR

Firstly, extract data.zip. To replicate the results on the HRSS dataset with neighbour count k = 100 and the "Mixed" negative sampling scheme, extract saved_models.zip and run:

python3 main.py --dataset HRSS --samples MIXED --k 100

To train a new model:

python3 main.py --dataset HRSS --samples MIXED --k 100 --train_new_model

《Pseudo-Label Transfer from Frame-Level to Note-Level in a Teacher-Student Framework for Singing Transcription from Polyphonic Music》(ICASSP 2022)

GitHub: github.com/keums/icassp2022-vocal-transcription

《Robust Disentangled Variational Speech Representation Learning for Zero-shot Voice Conversion》(ICASSP 2022)

GitHub: github.com/jlian2/Robust-Voice-Style-Transfer

Demo: https://jlian2.github.io/Robust-Voice-Style-Transfer/

《HandoverSim: A Simulation Framework and Benchmark for Human-to-Robot Object Handovers》(ICRA 2022)

GitHub: github.com/NVlabs/handover-sim

2022-06-03 16:13:46: Running evaluation for results/2022-02-28_08-57-34_yang-icra2021_s0_test
2022-06-03 16:13:47: Evaluation results:
|  success rate   |  mean accum time (s)      |                    failure (%)                     |
|      (%)        |  exec  |  plan  |  total  |  hand contact   |  object drop    |    timeout     |
|:---------------:|:------:|:------:|:-------:|:---------------:|:---------------:|:--------------:|
| 64.58 ( 93/144) | 4.864  | 0.036  |  4.900  | 17.36 ( 25/144) | 11.81 ( 17/144) |  6.25 ( 9/144) |
2022-06-03 16:13:47: Printing scene ids
2022-06-03 16:13:47: Success (93 scenes): [scene id list omitted]
2022-06-03 16:13:47: Failure - hand contact (25 scenes): [scene id list omitted]
2022-06-03 16:13:47: Failure - object drop (17 scenes): [scene id list omitted]
2022-06-03 16:13:47: Failure - timeout (9 scenes): [scene id list omitted]
2022-06-03 16:13:47: Evaluation complete.

《CDLM: Cross-Document Language Modeling》(EMNLP 2021)

GitHub: github.com/aviclu/CDLM

You can either pretrain by yourself or use the pretrained CDLM model weights and tokenizer files, which are available on HuggingFace.

Then, use:

from transformers import AutoTokenizer, AutoModel

# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('biu-nlp/cdlm')
model = AutoModel.from_pretrained('biu-nlp/cdlm')
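Once loaded, the model can be called like any other HuggingFace encoder. The snippet below is a minimal sketch rather than an example from the CDLM repository: the input text and max_length are placeholders, and any CDLM-specific document separator tokens described in the repository would still need to be added for cross-document inputs.

# encode a piece of text and obtain contextualized representations
inputs = tokenizer("Example document text.", return_tensors="pt", truncation=True, max_length=512)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)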

《Continual Learning for Task-Oriented Dialogue Systems》(EMNLP 2021)

GitHub: github.com/andreamad8/ToDCL

《Torsional Diffusion for Molecular Conformer Generation》(2022)

GitHub: github.com/gcorso/torsional-diffusion

《MMChat: Multi-Modal Chat Dataset on Social Media》(2022)

GitHub: github.com/silverriver/MMChat

《Can CNNs Be More Robust Than Transformers?》(2022)

GitHub: github.com/UCSC-VLAA/RobustCNN

《Revealing Single Frame Bias for Video-and-Language Learning》(2022)

GitHub: github.com/jayleicn/singularity

《Progressive Distillation for Fast Sampling of Diffusion Models》(2022)

GitHub: github.com/Hramchenko/diffusion_distiller

《Neural Basis Models for Interpretability》(2022)

GitHub: github.com/facebookresearch/nbm-spam

《Scalable Interpretability via Polynomials》(2022)

GitHub: github.com/facebookresearch/nbm-spam

《Infinite Recommendation Networks: A Data-Centric Approach》(2022)

GitHub: github.com/noveens/infinite_ae_cf

《The GatedTabTransformer. An enhanced deep learning architecture for tabular modeling》(2022)

GitHub: github.com/radi-cho/GatedTabTransformer

Usage:

import torch
import torch.nn as nn
from gated_tab_transformer import GatedTabTransformer

model = GatedTabTransformer(
    categories = (10, 5, 6, 5, 8),  # tuple containing the number of unique values within each category
    num_continuous = 10,            # number of continuous values
    transformer_dim = 32,           # dimension, paper set at 32
    dim_out = 1,                    # binary prediction, but could be anything
    transformer_depth = 6,          # depth, paper recommended 6
    transformer_heads = 8,          # heads, paper recommends 8
    attn_dropout = 0.1,             # post-attention dropout
    ff_dropout = 0.1,               # feed forward dropout
    mlp_act = nn.LeakyReLU(0),      # activation for final mlp, defaults to relu, but could be anything else (selu, etc.)
    mlp_depth = 4,                  # mlp hidden layers depth
    mlp_dimension = 32,             # dimension of mlp layers
    gmlp_enabled = True             # gmlp or standard mlp
)

x_categ = torch.randint(0, 5, (1, 5))  # category values, from 0 - max number of categories, in the order as passed into the constructor above
x_cont = torch.randn(1, 10)            # assume continuous values are already normalized individually

pred = model(x_categ, x_cont)
print(pred)
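For context, a toy optimization step on top of the snippet above could look like the following. This is a minimal sketch and not part of the GatedTabTransformer repository; the random labels and the BCEWithLogitsLoss choice are assumptions for the binary (dim_out = 1) setting.

# hypothetical single training step for the binary-output configuration above
labels = torch.randint(0, 2, (1, 1)).float()                  # fake binary target
criterion = nn.BCEWithLogitsLoss()                            # model outputs raw logits
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

optimizer.zero_grad()
loss = criterion(model(x_categ, x_cont), labels)
loss.backward()
optimizer.step()
print(loss.item())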

《Distract Your Attention: Multi-head Cross Attention Network for Facial Expression Recognition》(2022)

GitHub: github.com/yaoing/DAN

《Towards Principled Disentanglement for Domain Generalization》(2021)

GitHub: github.com/hlzhang109/DDG

《SoundStream: An End-to-End Neural Audio Codec》(2021)

GitHub: github.com/we *** z/SoundStream

© THE END

