AI & ML interests

Breaking the opacity of language models for legal professionals 📖 Join us by smashing the button at top right 🤗

Recent Activity

AdinaY 
posted an update about 16 hours ago
WeChat AI is shipping!

WeDLM 🔥 A new language model that generates tokens in parallel, making it faster than standard LLMs, with the same Transformer setup!
https://huggingface.co/collections/tencent/wedlm

✨ 7B/8B - Base & Instruct
✨ Apache 2.0
AdinaY 
posted an update about 18 hours ago
Nymbo 
posted an update 1 day ago
Genuine recommendation: you should really use this AutoHotkey macro. Save the file as macros.ahk and run it. Before sending a prompt to your coding agent, press Ctrl + Alt + 1 and paste your prompt into any regular chatbot, then send the output to the agent. This is the actual, boring, real way to "10x your prompting".

Use the other number keys to avoid repeating yourself over and over. I use this macro probably 100-200 times per day. AutoHotkey isn't as new or hyped as a lot of other workflows, but there's a reason it's still widely used after 17 years. Don't overcomplicate it.

; Requires AutoHotkey v1.1+

; All macros are `Ctrl + Alt + <variable>`

^!1::
    Send, Please help me more clearly articulate what I mean with this message (write the message in a code block):
return

^!2::
    Send, Please make the following changes:
return

^!3::
    Send, It seems you got cut off by the maximum response limit. Please continue by picking up where you left off.
return


In my experience over the past few months, Ctrl + Alt + 1 works best with Instruct models (non-thinking). Reasoning causes some models to ramble and miss the point. I've just been using GPT-5.x for this.
AdinaY 
posted an update 2 days ago
AdinaY 
posted an update 2 days ago
Daily Papers just got an AI reading assistant 🔥

You can ask any question you want: clarify a paragraph, get a short summary... all without leaving the page!

✨ Powered by HuggingChat + Hugging Face MCP server
AdinaY 
posted an update 4 days ago
Chinese open source AI in December 2025 was about the stack coming together: open, end to end, and ready to ship 🔥

https://huggingface.co/collections/zh-ai-community/december-2025-china-open-source-highlights

✨ Big wave of foundation models: still scaling, but efficiency, reasoning, and deployment now matter more than size
- DeepSeek-V3.2
- Z.ai GLM-4.7
- MiniMax-M2.1
- Xiaomi: MiMo-V2-Flash

✨ Multimodal reasoning is now default
- Z.ai GLM-4.6V
- Z.ai AutoGLM-Phone 9B
- Bytedance: Dolphin-v2

✨ Image & video: editable assets and real workflows
- Qwen-Image-Layered / Image-2512
- Meituan: LongCat-Image & Image Edit
- AIDC: Ovis-Image-7B
- Live-Avatar / LongCat-Video-Avatar
- HY-WorldPlay / RealVideo

✨ Audio goes edge ready
- GLM-ASR-Nano / Fun-ASR-Nano
- GLM-TTS / VoxCPM1.5
- CosyVoice 0.5B

✨ The quiet backbone: data & infrastructure
- Finch (FinWorkBench)
- Tencent ARC: TimeLens-100K
- BIGAI: TongSIM-Asset
- MiniMax: VTP-Base

✨ Also, congrats to MiniMax and Z.ai on announcing their IPOs, and to Moonshot on announcing a new $500M funding round 🔥

Like everyone else, I was OOO at the end of December, so feel free to share (in the comments or via PR) anything I missed from this list!
AdinaY 
posted an update 4 days ago
AdinaY 
posted an update 5 days ago
2025.1 - DeepSeek entered the scene, backed by High-Flyer Quant
2026.1 - IQuest enters the game, backed by Uniquant Quant 📈 and launching IQuest-Coder on Hugging Face
https://huggingface.co/collections/IQuestLab/iquest-coder

✨ 40B models: Instruct / Thinking / Loop
✨ Loop = MoE-level performance with only ~5% extra training cost
✨ Native 128K context
AdinaY 
posted an update 21 days ago
Nymbo 
posted an update 21 days ago
🚨 New tool for the Nymbo/Tools MCP server: The new Agent_Skills tool provides full support for Agent Skills (Claude Skills but open-source).

How it works: The tool exposes the standard discover/info/resources/validate actions. Skills live in /Skills under the same File_System root, and any bundled scripts run through Shell_Command, no new infrastructure required.

Agent_Skills(action="discover")  # List all available skills
Agent_Skills(action="info", skill_name="music-downloader")  # Full SKILL.md
Agent_Skills(action="resources", skill_name="music-downloader")  # Scripts, refs, assets


I've included a music-downloader skill as a working demo; it wraps yt-dlp for YouTube/SoundCloud audio extraction.

Caveat: On HF Spaces, Shell_Command works for most tasks, but some operations (like YouTube downloads) are restricted due to the container environment. For full functionality, run the server locally on your machine.
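Given the layout described above (skills as folders under a /Skills root, each with a SKILL.md manifest), the discover action could plausibly be a simple directory scan. A minimal sketch, where the function name, paths, and return shape are my assumptions rather than the server's actual code:

```python
from pathlib import Path

def discover_skills(root: str = "Skills") -> list[dict]:
    """Hypothetical sketch: list every folder under the skills root
    that contains a SKILL.md manifest."""
    skills = []
    for skill_dir in sorted(Path(root).iterdir()):
        manifest = skill_dir / "SKILL.md"
        if skill_dir.is_dir() and manifest.is_file():
            skills.append({
                "name": skill_dir.name,
                "manifest": str(manifest),
            })
    return skills
```

Folders without a SKILL.md are simply skipped, so dropping a new skill directory into /Skills is all it takes to make it discoverable.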

Try it out ~ https://www.nymbo.net/nymbot
AdinaY 
posted an update 24 days ago
Finch 💰 an enterprise-grade benchmark that measures whether AI agents can truly handle real world finance & accounting work.

FinWorkBench/Finch

✨ Built from real enterprise data (Enron + financial institutions), not synthetic tasks
✨ Tests end-to-end finance workflows
✨ Multimodal & cross-file reasoning
✨ Expert-annotated (700+ hours) and genuinely challenging
Nymbo 
posted an update about 1 month ago
🚀 I've just shipped a major update to the Nymbo/Tools MCP server: the Agent_Terminal, a single "master tool" that cuts token usage by over 90%!

Anthropic found 98.7% context savings using code execution with MCP, and Cloudflare published similar findings. This is my open-source implementation of the same idea.

# The Problem

Traditional MCP exposes every tool definition directly to the model. With 12 tools, that's thousands of tokens consumed *before the conversation even starts*. Each tool call also passes intermediate results through the context window — a 10,000-row spreadsheet? That's all going into context just to sum a column.
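To make the spreadsheet case concrete, here is a toy sketch of what code execution buys you: the data is processed locally and only the one-line answer is printed back into context. The CSV here is synthetic and the tool plumbing is omitted; this just illustrates the principle, not Agent_Terminal's internals.

```python
import csv
import io
import random

# Stand-in for a 10,000-row spreadsheet fetched by some tool call.
rows = [{"amount": random.randint(1, 100)} for _ in range(10_000)]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["amount"])
writer.writeheader()
writer.writerows(rows)
buf.seek(0)

# All 10,000 rows stay inside the execution sandbox;
# only this single printed line ever reaches the context window.
total = sum(int(r["amount"]) for r in csv.DictReader(buf))
print(f"Column total: {total}")
```

One short print statement replaces thousands of tokens of raw rows, which is where the bulk of the savings comes from.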

# The Solution: One Tool to Rule Them All

Agent_Terminal wraps all 12 tools (Web_Search, Web_Fetch, File_System, Generate_Image, Generate_Speech, Generate_Video, Deep_Research, Memory_Manager, Obsidian_Vault, Shell_Command, Code_Interpreter) into a single Python code execution gateway.

Instead of the model making individual tool calls, it writes Python code that orchestrates the tools directly:

# Search for Bitcoin price
result = Web_Search("current price of bitcoin", max_results=3)
print(result)


Don't know what tools are available? The agent can discover them at runtime:

print(search_tools('image'))  # Find tools by keyword
print(usage('Generate_Image'))  # Get full docs for a specific tool
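The post doesn't show how these discovery helpers are implemented; one plausible sketch is a small registry keyed by tool name, searched by keyword. The registry contents and docstrings below are invented for illustration:

```python
# Hypothetical registry mapping tool names to their documentation strings.
TOOL_DOCS = {
    "Web_Search": "Web_Search(query, max_results=5) -> str: search the web.",
    "Generate_Image": "Generate_Image(prompt) -> str: create an image, return its path.",
    "Generate_Speech": "Generate_Speech(text) -> str: synthesize audio, return its path.",
}

def search_tools(keyword: str) -> list[str]:
    """Return tool names whose name or docs mention the keyword."""
    kw = keyword.lower()
    return [name for name, doc in TOOL_DOCS.items()
            if kw in name.lower() or kw in doc.lower()]

def usage(tool_name: str) -> str:
    """Return the full documentation for one tool."""
    return TOOL_DOCS.get(tool_name, f"Unknown tool: {tool_name}")
```

Because the model only pulls in docs for the tools it actually needs, the full set of tool definitions never has to sit in context up front.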


The individual direct tool calls are all still there, but they can be disabled when using Agent_Terminal. Try it now: https://www.nymbo.net/nymbot
lunarflu 
posted an update 2 months ago
lunarflu 
posted an update 2 months ago
lunarflu 
posted an update 2 months ago
💸🤑You don’t need 100 GPUs to train something amazing!

Our Smol Training Playbook teaches you a better path to world-class LLMs, for free!

Check out the #1 trending space on 🤗 :
HuggingFaceTB/smol-training-playbook
AdinaY 
posted an update 2 months ago
Kimi K2 Thinking is now live on the hub 🔥

moonshotai/Kimi-K2-Thinking

✨ 1T MoE for deep reasoning & tool use
✨ Native INT4 quantization = 2× faster inference
✨ 256K context window
✨ Modified MIT license
AdinaY 
posted an update 2 months ago
Chinese open source AI in October wasn’t about bigger models; it was about real-world impact 🔥

https://huggingface.co/collections/zh-ai-community/october-2025-china-open-source-highlights

✨ Vision-Language & OCR wave 🌊
- DeepSeek-OCR: 3B
- PaddleOCR-VL: 0.9B
- Qwen3-VL: 2B / 4B / 8B / 32B / 30B-A3B
- Open-Bee: Bee-8B-RL
- Z.ai Glyph: 10B

OCR is industrializing; the real game now is understanding the (long-context) document, not just reading it.

✨ Text generation: scale or innovation?
- MiniMax-M2: 229B
- Antgroup Ling-1T & Ring-1T
- Moonshot Kimi-Linear: linear-attention challenger
- Kwaipilot KAT-Dev

Efficiency is the key.

✨ Any-to-Any & World-Model: one step closer to the real world
- BAAI Emu 3.5
- Antgroup Ming-flash-omni
- HunyuanWorld-Mirror: 3D

Aligning with the global “world model” trend

✨ Audio & Speech + Video & Visual: moving from entertainment labs to delivery platforms
- SoulX-Podcast TTS
- LongCat-Audio-Codec & LongCat-Video by delivery platform Meituan
- xiabs DreamOmni 2

Looking forward to what's next 🚀