Links for December 17th

Here are some links I found interesting this week.

Before the flood - by Samuel Hammond - Second Best

metaseq/README.md at main · facebookresearch/metaseq

Data2vec 2.0: Highly efficient self-supervised learning for vision, speech and text

How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources

The Annotated S4

The Annotated Transformer

Why S4 is Good at Long Sequence: Remembering a Sequence with Online Function Approximation

A Closer Look at Large Language Models Emergent Abilities

index

[ECCV 2020] NeRF: Neural Radiance Fields (10 min talk) - YouTube

Fishmans - Long Season (Nightcore Mix) - YouTube

[2102.07350] Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm

The Bitter Lesson

Huberman AI

Homemade Sour Strawberry Gummies | Love and Olive Oil

Links for December 10th

Here are some links I found interesting this week.

Automatic Differentiation

The shit guide to training with optimized dreambooth stable diffusion

[2212.01349] Nonparametric Masked Language Modeling

The Chinese Civil Examinations | Hilde De Weerdt | Inference

[2209.11142] A Generalist Neural Algorithmic Learner

Illustrating Reinforcement Learning from Human Feedback (RLHF)

Life After Lifestyle

SerpApi: Google Search API

Links for December 3rd

Here are some links I found interesting this week.

Another data science student's blog – Pointer cache for Language Model

An empirical analysis of compute-optimal large language model training

Training Compute-Optimal Large Language Models

[2211.13319] Make-A-Story: Visual Memory Conditioned Consistent Story Generation

Symmetric derivative - Wikipedia

Lamp「恋人へ」(2004) - YouTube

DALL-E-Explained/README.md at main · simonsanvil/DALL-E-Explained

PyTorch internals : ezyang’s blog

Tales of the M1 GPU - Asahi Linux

Retouched - A2 - YouTube

Relaxing music from Gran Turismo #1 (GT2 - 6) - YouTube

Pixelblog - 41 - Isometric Pixel art — SLYNYRD

Button

Links for November 26th

Here are some links I found interesting this week.

Language Models are Few-Shot Learners

mosaicml/composer: Train neural networks up to 7x faster

alantess/transformer: Implementation of a modified vision transformer on the crypto market space

Zero-Shot Text-to-Image Generation

Training language models to follow instructions with human feedback

Common Problems | Machine Learning | Google Developers

Introduction - The Encointer Book

Why I think strong general AI is coming soon - LessWrong

A Census of the Factor Zoo by Campbell R. Harvey, Yan Liu :: SSRN

Magic3D: High-Resolution Text-to-3D Content Creation

Binance Data Collection

[1910.02054] ZeRO: Memory Optimizations Toward Training Trillion Parameter Models

How to fine tune stable diffusion: how we made the text-to-pokemon model at Lambda

diffusers/convert_original_stable_diffusion_to_diffusers.py at main · huggingface/diffusers

Joji Instrumentals to relax to │Rain ambience - YouTube

Stable Diffusion | Qiang Zhang

CLIP: Connecting Text and Images

Long-term reduction in hyperglycemia in advanced type 1 diabetes: the value of induced aerobic glycolysis with BCG vaccinations | npj Vaccines

bleeding edge

8-bit Optimizers via Block-wise Quantization - YouTube

InstructPix2Pix

On the Opportunities and Risks of Foundation Models

Links for November 19th

Here are some links I found interesting this week.

why does zsh start so slowly?

Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning

Nothing has ever angered me more than The Google Play Team

White Boy | BitMEX Blog

[1907.05600] Generative Modeling by Estimating Gradients of the Data Distribution

Dreambooth broken, possibly because of ADAM optimizer, possibly more. · Issue #712 · huggingface/diffusers

Attention Networks: A simple way to understand Cross-Attention | by Geetansh Kalra | Medium