- 🔭 I’m currently a third-year Ph.D. student at SJTU (IWIN Lab).
- 🌱 I’m currently working on LLMs/MLLMs, explainable attention, information flow, and truthful AI.
- 💬 Feel free to reach out if you’d like to work on something interesting together; my current focus is hallucination, information flow, and saliency-based interpretability. 😊😊
- 📫 Email: framebreak@sjtu.edu.cn. WeChat: SemiZxf
- 📕 "When the tide comes in on the Qiantang River, only today do I know who I truly am." 😊😊
- 🌱 Homepage: zhangbaijin.github.io
- 💬 Google Scholar
Pinned repositories:
- From-Redundancy-to-Relevance: [NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models
- FanshuoZeng/Simignore: [AAAI 2025] Code for the paper "Enhancing Multimodal Large Language Models Complex Reasoning via Similarity Computation"
- ErikZ719/MCA-LLaVA: [ACM MM 2025] MCA-LLaVA: Manhattan Causal Attention for Reducing Hallucination in Large Vision-Language Models
- itsqyh/Shallow-Focus-Deep-Fixes: [EMNLP 2025 Oral] 🎉 Shallow Focus, Deep Fixes: Enhancing Shallow Layers Vision Attention Sinks to Alleviate Hallucination in LVLMs
- Awesome-MLLM-Hallucination (forked from showlab/Awesome-MLLM-Hallucination): 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).