Daily Dose of Data Science
Deep learning
LoRA-derived Techniques for Optimal LLM Fine-tuning
LoRA variants explained in a beginner-friendly way.
Apr 30 • Avi Chawla
Train and Test-time Data Augmentation
Generating more data from the data you already have.
Apr 12 • Avi Chawla and Banias Baabe
A Beginner-friendly Guide to Multi-GPU Training
Learn how to scale models using distributed training.
Apr 6 • Avi Chawla
Is Your Model Data Deficient?
More data may not always help.
Mar 25 • Avi Chawla
11 Powerful Techniques to Supercharge Your ML Models
Take your ML models to the next level.
Mar 22 • Avi Chawla
From PyTorch to PyTorch Lightning
Simplify deep learning model building and training with PyTorch Lightning.
Mar 13 • Avi Chawla
Gradient Accumulation in Neural Networks and How it Works
An underrated technique for training neural networks in memory-constrained settings.
Mar 8 • Avi Chawla
Augmenting LLMs: Fine-Tuning or RAG?
The trade-offs between fine-tuning and RAG.
Mar 6 • Avi Chawla
Full-model Fine-tuning vs. LoRA vs. RAG
A visual summary.
Mar 1 • Avi Chawla
Implementing LoRA from Scratch for Fine-tuning LLMs
Understanding the challenges of traditional fine-tuning and addressing them with LoRA.
Feb 26 • Avi Chawla
Mixed Precision Training
Train large deep learning models more efficiently.
Feb 25 • Avi Chawla
Double Descent vs. Bias-Variance Trade-off
A counterintuitive phenomenon observed when training ML models.
Feb 19 • Avi Chawla