December 11, 2024 - How devs spend their time

The Breakpoint

Greetings, cracked devs. Welcome back to another edition of the Breakpoint. In this week's edition, we've got a ton of amazing products from the leaderboard and a look at how developers spend their time (according to Amazon). Let's dive in.

The Latest

Five of the most interesting recent dev tool (or dev tool-adjacent) launches on the site. 

Reflex Cloud lets you build web apps in pure Python (no JavaScript required) and deploy with a single command. It includes a dashboard to manage your apps, plus custom domains.

Athina is an AI development platform that enables AI teams to build, test, and monitor LLM-powered applications. Teams can collaborate on prompts, flows, datasets, and more.

Kestra is an open-source orchestration platform that offers a low-code UI, a full-code API, Git sync, 600+ plugins, and remote execution.

KushoAI is a Google Sheets extension that writes API tests for you, just like a human SDET or QA engineer would. Just add basic API information and it writes an exhaustive set of tests from scratch.

DataFuel API scrapes entire websites and knowledge bases in a single query. Get clean, markdown-structured web data instantly for your RAG systems and AI models.

How devs spend their time

Amazon just dropped a stat that might make you rethink your workflow: developers spend less than an hour a day actually coding. The rest? Bogged down by tests, reviews, and documentation. Enter Amazon Q Developer, their new AI-powered assistant designed to handle the grunt work—think unit testing, code reviews, and even generating documentation.

The goal? Free up your brain for the fun part: actual coding. It's a clever play to boost developer creativity and efficiency, especially as AI tools keep reshaping what “productivity” looks like in tech.

Bonus

Check out a new post on our website, A Founder's Guide to AI Fine-Tuning, by Kyle Corbitt, founder and CEO of OpenPipe.

Fine-tuning is the process of steering a model's behavior by updating its actual weights, as opposed to prompting, which only rewrites the instructions or adds examples to the context. Compared to prompting, fine-tuning gives you much deeper and more nuanced control of a model's behavior. On the other hand, when done incorrectly it can lead to conditions like "catastrophic forgetting," where the model actually gets much worse instead of better. Fine-tuning is a power tool!
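The weights-versus-context distinction can be sketched with a toy model. This is purely illustrative (a one-parameter linear "model," not a real LLM, and the data and learning rate are made up): prompting leaves the weights untouched and only changes the input, while fine-tuning runs gradient descent that changes the weights themselves.

```python
def predict(weight, bias, x):
    """A toy 'model': y = weight * x + bias."""
    return weight * x + bias

w, b = 2.0, 0.0

# Prompting: the weights stay fixed; we only change what we feed in.
base = predict(w, b, 3.0)      # same model, input 3.0
prompted = predict(w, b, 3.5)  # same model, different input

# Fine-tuning: gradient descent on (input, target) pairs updates
# the weights themselves. Targets follow y = 2x + 1.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
lr = 0.05
for _ in range(200):
    for x, y_true in data:
        err = predict(w, b, x) - y_true
        # Gradients of squared error with respect to weight and bias
        w -= lr * err * x
        b -= lr * err

# After training, w and b have moved toward the new behavior (y = 2x + 1),
# and every future prediction changes, no matter what input is given.
```

The same asymmetry drives the "catastrophic forgetting" risk mentioned above: because fine-tuning rewrites the weights, a bad update degrades the model on every input, whereas a bad prompt only affects that one request.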

What did you think of today's newsletter?
