Pull requests: PaddlePaddle/PaddleNLP


[Tokenizer] Unify tokenizer _pad
#9280 opened Oct 16, 2024 by DrownFish19
Gpt 13b auto parallel
#9278 opened Oct 16, 2024 by blacksheep-Aristotle
[intel_hpu] initial commit for intel_hpu support
#9273 opened Oct 15, 2024 by yanfeich
[CI] Skip inference test cases
#9270 opened Oct 15, 2024 by DrownFish19
Adding LoKrModel Class to paddle.peft library
#9269 opened Oct 15, 2024 by WhuanY
support attention mask using causal=True
#9268 opened Oct 15, 2024 by GuoxiaWang
[for test] Apex grabs the entire global_step
#9265 opened Oct 14, 2024 by tianhaodongbd
[FlashMask] Add FlashMask for Qwen2
#9264 opened Oct 14, 2024 by DrownFish19
Support llama13b finetune with iluvatar corex.
#9252 opened Oct 12, 2024 by tianyuzhou668
[LLM] Add deepseekv2
#9250 opened Oct 12, 2024 by DrownFish19
Support automatically saving args to a local JSON file
#9247 opened Oct 11, 2024 by luoyuedong
[LLM INFER] Append attn
#9244 opened Oct 11, 2024 by yuanlehome
[BugFix] Update predictor.py
#9243 opened Oct 11, 2024 by ZHUI
[Inference] Remove useless code in quant attention
#9231 opened Oct 9, 2024 by lixcli
[Readme] Add flash mask
#9219 opened Sep 30, 2024 by lugimzzz
[NPU] Add chatglmv3-6b multi-hardware support
#9213 opened Sep 27, 2024 by Jakin-huang
[llm] support long sequence training
#9208 opened Sep 26, 2024 by lugimzzz