Pull requests: InfiniTensor/InfiniCore
issue/1061 - feat: use template to replace int64_t in paged_attentio_prefill kernel for moore gpu
Labels: Ready · Module: operators · Type: optimization
#1063 opened Mar 9, 2026 by spike-zhu
issue/1052: Add per_tensor_quant_int8 and per_tensor_dequant_int8 operators for NVIDIA and QY
#1057 opened Mar 6, 2026 by xgqdut2016
Introduce the flash attention library via the aten adapter
Labels: Ready · Module: operators · Type: development · Type: refactor · Urgent!
#1034 opened Feb 28, 2026 by PanZezhong1725
issue/978 - metax cuda graph impl and wrappings
Labels: Ready · Type: development
#982 opened Jan 26, 2026 by wooway777
Issue/843: Add quant linear support; add I8 Gemm and per_channel_quant_i8 operators on C610
#965 opened Jan 22, 2026 by qinyiqun
issue/958 - Add read tensor from file feature in dynamic ops test && …
#959 opened Jan 21, 2026 by baominghelly
Issue/951: feat: add paged attention related operator for metax
#953 opened Jan 20, 2026 by Ceng23333
issue/791 Modify the addRmsNorm interface and the rmsnorm module
Labels: Ready · Module: operators · Type: optimization
#947 opened Jan 19, 2026 by PanZezhong1725
issue/810 support more ops as graph op
Labels: Ready · Module: operators · Type: development · Urgent!
#946 opened Jan 19, 2026 by PanZezhong1725