Friday, October 24, 2025

Winter Vegan Eating — Practical & Local Foods for Maharashtra & Gujarat

Intro: Winter is here — with cooler mornings and bigger appetites comes the temptation to snack on highly processed or heavy foods. In this article we look at the best affordable, traditional, local foods for the winter season in western India (Maharashtra & Gujarat). The goal: simple, nutritious, warming, and fully vegan meal ideas using ingredients you can find in local markets.

Why winter matters

Winter (November–February) is the best season for variety and affordability in this region. Leafy greens, millets and hearty root vegetables are abundant — making it easy to assemble meals that are higher in iron, calcium and plant protein, while still being warming and satisfying.

Core seasonal foods (West India — Maharashtra & Gujarat)

  • Grains: Wheat, jowar, bajra, rice, poha, broken wheat (lapsi)
  • Pulses & legumes: Toor, masoor, chana, urad, moong, matki (moth), sprouted beans
  • Vegetables & greens: Palak (spinach), methi, sarson, bathua, cauliflower, cabbage, carrots, peas, radish, beetroot, lauki, sweet potato
  • Fruits: Guava, amla, oranges, apple (local), custard apple (sitaphal), dates
  • Nuts, seeds & sweeteners: Groundnuts (peanuts), sesame (til), flaxseed, jaggery (gur)

Oil guidance — what to cook with

Traditional Indian cooking often uses high heat. For Maharashtra & Gujarat winters:

| Oil | Use | Notes |
| --- | --- | --- |
| Groundnut (peanut) oil | Main everyday cooking (sabji, dal, frying) | High smoke point, mild flavor — best base oil. |
| Mustard oil | High-heat tadka, occasional deep fry | High smoke point and warming; pungent flavor — good 1–2×/week. |
| Sesame (til) oil | Finishing oil, chutneys, low-heat dishes | Strong flavor and calcium benefits — use as a drizzle or low-heat add. |

Practical tip: Use groundnut oil as your base (≈70% of cooking), add mustard oil a couple of times per week, and use sesame oil as an occasional finishing oil for flavor and calcium.

Weekly Winter Vegan Meal Chart — Ready to use (Mon–Sun)

Portion notes: adjust servings to caloric needs. Each meal aims to combine grains + legumes + vegetables + fruit/seed when possible.

| Day | Breakfast | Lunch | Dinner |
| --- | --- | --- | --- |
| Mon | Methi thepla + peanut chutney + 1 amla | Bajra bhakri, toor dal tadka, gajar–matar sabji, salad | Moong–palak khichdi + sesame drizzle + orange |
| Tue | Poha with peas & peanuts + guava | Jowar roti, masoor dal, cabbage–peas sabji, beetroot salad | Moong usal + steamed rice + carrot soup |
| Wed | Upma with vegetables + dry coconut chutney | Chapati, chana dal, sarson–bathua saag, radish salad | Matki usal + bajra roti + small til–gur laddoo |
| Thu | Sprouted moong salad + jaggery water | Handvo + green chutney + carrot–beet sticks | Masoor–palak dal + rice + sautéed lauki–chana dal |
| Fri | Thick vegetable dalia + 2 dates | Chapati, moong dal, beetroot–carrot sabji, orange | Khichdi with veggies + sesame oil tadka + roasted papad |
| Sat | Sabudana khichdi (vegan) + fruit | Jowar bhakri, chole, palak sabji, cabbage salad | Toor dal soup + lauki–methi sabji + rice |
| Sun | Lapsi (broken wheat + jaggery) + roasted groundnuts | Chapati, mixed dal, cauliflower–peas sabji, beetroot salad | Bajra roti, methi–toor dal, til chutney + guava slice |

Daily & Weekly Nutrition Logic (quick)

  • Protein: Dals, sprouts, chole, matki — distributed across lunches/dinners.
  • Iron & calcium: Leafy greens (palak, methi, sarson), sesame, jaggery, and millets.
  • Healthy fats: Groundnut/mustard/sesame — rotate to keep balance and heat stability.
  • Vitamin C: Amla, guava, orange — helps iron absorption; include daily.

Cooking & Practical Tips

  1. Prefer groundnut oil for most cooking. Use mustard oil for high-heat tadka and sesame oil as a finishing oil.
  2. Pair iron-rich greens with vitamin C fruits (amla, guava, orange) for better absorption.
  3. Use millets (bajra/jowar) 2–3 times a week for warmth and fiber.
  4. Keep snacks simple: roasted chana, roasted peanuts, makhana or murmura chivda.
  5. Limit deep-fried or heavily processed snacks; if you crave something crunchy, roast or shallow-fry instead.

Sample Grocery Checklist (weekly)

  • Jowar / Bajra flour, wheat flour, poha, broken wheat
  • Toor, masoor, chana, moong, matki (and some sprouted moong for salad)
  • Palak, methi, sarson/bathua (as available), cauliflower, cabbage, carrots, peas, lauki
  • Guava, amla, oranges, apple, dates
  • Groundnuts, sesame seeds, jaggery, a small pack of flaxseed
  • Groundnut oil (main), a bottle of mustard oil, small bottle of sesame oil

Closing

This winter plan follows traditional, locally available foods in Maharashtra & Gujarat and aims to be affordable, nutritious and warming.

Happy winter cooking — simple, local and nourishing!

Sunday, October 5, 2025

SLMs: The Hidden Heroes of the AI Revolution

In the whirlwind of AI advancements, large language models (LLMs) like GPT-4 or Grok's massive variants often steal the spotlight with their trillion-parameter prowess and headline-grabbing capabilities. But beneath the hype lies a quieter, more pervasive force: Small Language Models (SLMs). These compact AI powerhouses, typically under 10 billion parameters and sometimes as small as 100 million parameters or fewer, are democratizing technology in ways that giants can't. They're the "hidden AI" running on your smartphone, optimizing edge devices, and fueling niche innovations without the need for data centers or exorbitant energy costs. As of October 2025, platforms like Hugging Face reveal that the bulk of AI development is happening in this small-scale arena, signaling a shift toward efficiency over sheer size.

What Are Small Language Models?

SLMs are streamlined AI models designed for natural language processing (NLP) and beyond, but with a fraction of the parameters found in LLMs. While LLMs boast hundreds of billions (or even trillions) of parameters for broad, general intelligence, SLMs focus on targeted tasks with models under 10B parameters—often distilled from larger ones for efficiency. This makes them ideal for real-world deployment where resources are limited. Techniques like knowledge distillation, quantization, and pruning allow SLMs to maintain high performance while slashing computational demands, enabling them to run on consumer hardware like laptops, phones, or IoT devices.
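
To make the compression idea concrete, here is a minimal sketch of one such technique, post-training dynamic quantization in PyTorch; the tiny model below is an illustrative stand-in, not a real SLM architecture.

```python
# Hedged sketch of post-training dynamic quantization with PyTorch.
# The toy model is a stand-in, not an actual SLM architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Embedding(num_embeddings=32_000, embedding_dim=512),  # toy vocab
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 32_000),
)

# Convert Linear weights to int8 for a smaller footprint and faster CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

tokens = torch.randint(0, 32_000, (1, 16))  # a fake token sequence
logits = quantized(tokens)
print(logits.shape)  # torch.Size([1, 16, 32000])
```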

Unlike LLMs, which require massive training datasets and months of compute time, SLMs can be fine-tuned in days on niche data, making them customizable for specific industries or applications. They're not just smaller; they're smarter in context—offering lower latency, reduced energy use, and enhanced privacy by processing data locally.
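
As a rough illustration of that quick, niche customization, the following sketch applies LoRA adapters via the peft library; the model name, target modules, and hyperparameters are assumptions chosen for the example, not a prescribed recipe.

```python
# Hedged sketch of parameter-efficient fine-tuning with LoRA (peft).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B"  # assumption: any sub-1B causal LM works similarly
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Train small low-rank adapters instead of all of the weights.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% trainable
```

From here the wrapped model can be trained with any standard loop or the transformers Trainer on a small domain dataset.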

The Numbers Speak: A Surge in Small-Scale Development

Hugging Face, the go-to hub for open-source AI models, paints a clear picture of where the action is. As of October 2025, the platform lists a staggering 2,133,656 models in total. Of these, 271,323—or nearly 13%—have 1 billion parameters or fewer, and models at or below 3B parameters together account for nearly 30% of everything on the hub. This dwarfs the larger ranges, underscoring a developer preference for compact, deployable models over behemoths.

Here's a breakdown of model counts by parameter size groupings:

| Parameter Range | Number of Models |
| --- | --- |
| ≤1B | 271,323 |
| 1B to 3B | 355,537 |
| 3B to 6B | 52,394 |
| 6B to 9B | 154,376 |
| 9B to 12B | 14,584 |
| 12B to 24B | 33,910 |
| 24B to 32B | 5,847 |
| 32B to 64B | 12,217 |
| 64B to 128B | 11,283 |
| 128B to 256B | 700 |
| 256B to 512B | 1,235 |
| >512B | 761 |
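
If you want to poke at these listings yourself, the huggingface_hub client exposes the same catalog programmatically. A minimal sketch follows, assuming a recent version of the library; note that parameter count is not a direct server-side list filter here, so the per-range totals above come from the Hub's web UI filters.

```python
# Exploratory sketch with the huggingface_hub client (assumed recent version).
from huggingface_hub import HfApi

api = HfApi()

# A few popular text-generation models, sorted by downloads.
for m in api.list_models(task="text-generation", sort="downloads", limit=5):
    print(m.id, m.downloads)

# For a single repo, the parameter count can often be read from its
# safetensors metadata when the repo publishes it.
info = api.model_info("microsoft/phi-2")
if info.safetensors is not None:
    print("parameters:", info.safetensors.total)  # ~2.8B for phi-2
```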

This distribution highlights a clear trend: innovation is clustering around smaller models. While LLMs grab attention for their scale, SLMs dominate in volume, reflecting their accessibility for researchers, startups, and hobbyists. In fact, from 2018 to 2025, the explosion in model parameters has been matched by a counter-movement toward miniaturization, as seen in historical charts tracking AI growth.

Tasks Targeted by <1B Parameter Models

Diving deeper, the <1B parameter models on Hugging Face span a diverse array of tasks, from multimodal applications to specialized NLP. This versatility shows how SLMs are filling gaps in practical AI use cases, often outperforming larger models in efficiency for domain-specific needs. Below is an overview of the task distribution:

Multimodal

| Task | Number of Models |
| --- | --- |
| Audio-Text-to-Text | 27 |
| Image-Text-to-Text | 1,292 |
| Visual Question Answering | 180 |
| Document Question Answering | 120 |
| Video-Text-to-Text | 10 |
| Visual Document Retrieval | 13 |
| Any-to-Any | 10 |

Computer Vision

| Task | Number of Models |
| --- | --- |
| Depth Estimation | 53 |
| Image Classification | 9,741 |
| Object Detection | 1,722 |
| Image Segmentation | 641 |
| Text-to-Image | 113 |
| Image-to-Text | 3,020 |
| Image-to-Image | 19 |
| Image-to-Video | 1 |
| Unconditional Image Generation | 4 |
| Video Classification | 1,305 |
| Text-to-Video | 4 |
| Zero-Shot Image Classification | 359 |
| Mask Generation | 79 |
| Zero-Shot Object Detection | 40 |
| Text-to-3D | 2 |
| Image-to-3D | 26 |
| Image Feature Extraction | 222 |
| Keypoint Detection | 17 |
| Video-to-Video | 1 |

Natural Language Processing

| Task | Number of Models |
| --- | --- |
| Text Classification | 55,007 |
| Token Classification | 10,225 |
| Table Question Answering | 44 |
| Question Answering | 4,029 |
| Zero-Shot Classification | 217 |
| Translation | 1,340 |
| Summarization | 912 |
| Feature Extraction | 6,710 |
| Text Generation | 48,594 |
| Fill-Mask | 5,875 |
| Sentence Similarity | 9,295 |
| Text Ranking | 507 |

Audio

| Task | Number of Models |
| --- | --- |
| Text-to-Speech | 1,700 |
| Text-to-Audio | 1,616 |
| Automatic Speech Recognition | 12,098 |
| Audio-to-Audio | 20 |
| Audio Classification | 2,092 |
| Voice Activity Detection | 3 |

Tabular

| Task | Number of Models |
| --- | --- |
| Tabular Classification | 5 |
| Tabular Regression | 1 |
| Time Series Forecasting | 66 |

Reinforcement Learning

| Task | Number of Models |
| --- | --- |
| Reinforcement Learning | 369 |
| Robotics | 44 |

Other

| Task | Number of Models |
| --- | --- |
| Graph Machine Learning | 13 |

Text classification and generation lead the pack, but the breadth—from robotics to audio—illustrates SLMs' role in multimodal and edge AI.

Why SLMs Are the Hidden Game-Changers

The advantages of SLMs extend far beyond the numbers. They're cost-effective to build and deploy, requiring less energy and hardware—making them a sustainable choice in an era of environmental scrutiny. Businesses are adopting them for specialized tasks, from customer service bots to medical diagnostics, where customization trumps generality. In agentic AI, SLMs are emerging as the natural engine for autonomous systems, with distillation pipelines converting LLM capabilities into compact, task-focused SLMs for efficiency.
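
At the core of that conversion is knowledge distillation, where a small student model is trained to mimic a large teacher's output distribution. A minimal sketch of the standard loss follows; the temperature and loss weighting are illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch of a knowledge-distillation loss in PyTorch.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors standing in for model outputs.
student = torch.randn(4, 100)   # batch of 4, vocab of 100
teacher = torch.randn(4, 100)
labels = torch.randint(0, 100, (4,))
print(distillation_loss(student, teacher, labels))
```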

Embedded in Major Operating Systems: SLMs Powering Everyday Devices

By 2025, SLMs are no longer experimental—they're deeply woven into the fabric of consumer operating systems, enabling seamless, on-device AI experiences. This integration turns everyday devices into intelligent companions, handling tasks like summarization, image analysis, and voice commands without cloud dependency.

  • Windows and Copilot: Microsoft's Phi family of SLMs, including Phi-3 (3.8B parameters) and the newer Phi-3.5, forms the backbone of Copilot+ PCs and the Windows Copilot Runtime. These models support text and vision tasks, running locally via DirectML and ONNX for low-latency features in apps like Paint and Notepad. A standout is Phi Silica, a compact on-device SLM announced at Ignite 2024 and rolled out in Q1 2025, which powers offline Copilot functionalities on Arm-based Windows devices. Developers gained API access in January 2025, allowing custom integrations for enhanced productivity without internet.
  • Apple Devices (iPhones and Macs): Apple Intelligence leverages a suite of on-device foundation language models, including a ~3 billion parameter SLM optimized for Apple Silicon. Rolled out in iOS 18, iPadOS 18, and macOS Sequoia (with expansions in iOS 19 and macOS 16 by mid-2025), it enables privacy-focused features like Writing Tools, notification summaries, and Siri enhancements across iPhones (15 Pro and later), iPads, and Macs. At WWDC 2025, Apple introduced an updated generation of these models, boosting capabilities in visual intelligence and live translation, all processed locally for speed and security.
  • Android (Leading Brands): Google's Gemini Nano (~1-3B parameters), the flagship on-device SLM, is pre-integrated into Android's AICore service for multimodal tasks like transcription and summarization. It's available on Pixel devices (9 series and later) and has expanded to major OEMs: Samsung's Galaxy S24 series, Z Fold 6, Z Flip 6, and S24 FE; Motorola's Edge 50 Ultra (the brand's first Gemini Nano phone); and integrations with OEM apps on Xiaomi and OnePlus devices. By Google I/O 2025, new GenAI APIs made Gemini Nano accessible to third-party developers, enabling smarter apps across these brands without network reliance. Vivo support remains emerging, with potential rollouts in late 2025 flagships.

This OS-level embedding shows how far SLMs have spread beyond the lab, and the same pattern of secure, low-cost, local deployment is being repeated in enterprises and education.
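
The vendor runtimes above (Phi Silica, Apple's foundation models, Gemini Nano) each expose their own platform APIs, which aren't shown here. As a rough open-source analogue of on-device inference, here is a sketch running a small Phi model locally with the transformers library, assuming a version recent enough to include the Phi-3 architecture (roughly 4.40+).

```python
# Rough analogue of on-device SLM inference with an open model.
# Not the vendors' actual runtime APIs.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    torch_dtype="auto",  # use the checkpoint's native precision
)
out = generator(
    "Summarize in one line: SLMs run locally for privacy and speed.",
    max_new_tokens=60,
)
print(out[0]["generated_text"])
```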

Looking Ahead: The SLM Era

As AI evolves, SLMs aren't just hidden—they're essential. With most development focused on models under 1B parameters, they're driving accessibility, innovation, and sustainability. While LLMs push boundaries, SLMs bring AI to the masses, proving that bigger isn't always better. In 2025 and beyond, expect SLMs to power everything from smart homes to personalized learning, quietly revolutionizing our world.
