<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Study-Notes on Nick Liu - Software Engineer</title>
    <link>/tags/study-notes/</link>
    <description>Recent content in Study-Notes on Nick Liu - Software Engineer</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <managingEditor>nickboy@users.noreply.github.com (Nick Liu)</managingEditor>
    <webMaster>nickboy@users.noreply.github.com (Nick Liu)</webMaster>
    <copyright>2026 Nick Liu</copyright>
    <lastBuildDate>Sat, 21 Mar 2026 17:30:33 -0700</lastBuildDate>
    <atom:link href="/tags/study-notes/index.xml" rel="self" type="application/rss+xml" />
    
    <item>
      <title>Finding the Bottom of a Valley Blindfolded: Understanding Gradient Descent</title>
      <link>/posts/ml-gradient-descent/</link>
      <pubDate>Fri, 20 Feb 2026 00:00:00 +0000</pubDate>
      <author>nickboy@users.noreply.github.com (Nick Liu)</author>
      <guid>/posts/ml-gradient-descent/</guid>
      <description>&lt;div class=&#34;lead text-neutral-500 dark:text-neutral-400 !mb-9 text-xl&#34;&gt;&#xA;  Imagine you&amp;rsquo;re &lt;strong&gt;blindfolded on a mountain&lt;/strong&gt; and you need to find the lowest valley. You can&amp;rsquo;t see anything, but you &lt;em&gt;can&lt;/em&gt; feel the ground under your feet. What would you do? You&amp;rsquo;d feel which direction slopes downward, take a small step that way, and repeat. Congratulations — you just invented &lt;strong&gt;gradient descent&lt;/strong&gt;, the algorithm behind nearly every modern AI system.&#xA;&lt;/div&gt;&#xA;&#xA;&#xA;&lt;h2 class=&#34;relative group&#34;&gt;Why Should You Care?&#xA;    &lt;div id=&#34;why-should-you-care&#34; class=&#34;anchor&#34;&gt;&lt;/div&gt;&#xA;    &#xA;    &lt;span&#xA;        class=&#34;absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none&#34;&gt;&#xA;        &lt;a class=&#34;text-primary-300 dark:text-neutral-700 !no-underline&#34; href=&#34;#why-should-you-care&#34; aria-label=&#34;Anchor&#34;&gt;#&lt;/a&gt;&#xA;    &lt;/span&gt;&#xA;    &#xA;&lt;/h2&gt;&#xA;&lt;p&gt;Optimization is everywhere. When your GPS finds the fastest route, when Netflix recommends a movie, when your phone recognizes your face — behind all of these is an algorithm trying to find the &lt;strong&gt;best possible answer&lt;/strong&gt; from a sea of possibilities. Gradient descent is &lt;em&gt;the&lt;/em&gt; workhorse algorithm that makes this happen.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>How Machines Ask Smart Questions: Entropy &amp; Information Gain</title>
      <link>/posts/ml-entropy-and-information-gain/</link>
      <pubDate>Fri, 20 Feb 2026 00:00:00 +0000</pubDate>
      <author>nickboy@users.noreply.github.com (Nick Liu)</author>
      <guid>/posts/ml-entropy-and-information-gain/</guid>
      <description>&lt;div class=&#34;lead text-neutral-500 dark:text-neutral-400 !mb-9 text-xl&#34;&gt;&#xA;  Imagine you&amp;rsquo;re playing &lt;strong&gt;20 Questions&lt;/strong&gt;. You&amp;rsquo;re trying to guess what animal your friend is thinking of. Would you start with &amp;ldquo;Is it a golden retriever?&amp;rdquo; or &amp;ldquo;Does it live in water?&amp;rdquo; The second question is obviously smarter — it eliminates roughly half the possibilities in one shot. Decision trees in machine learning work exactly the same way, and they use &lt;strong&gt;entropy&lt;/strong&gt; and &lt;strong&gt;information gain&lt;/strong&gt; to figure out what the smartest question is.&#xA;&lt;/div&gt;&#xA;&#xA;&#xA;&lt;h2 class=&#34;relative group&#34;&gt;What&amp;rsquo;s the Big Idea?&#xA;    &lt;div id=&#34;whats-the-big-idea&#34; class=&#34;anchor&#34;&gt;&lt;/div&gt;&#xA;    &#xA;    &lt;span&#xA;        class=&#34;absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none&#34;&gt;&#xA;        &lt;a class=&#34;text-primary-300 dark:text-neutral-700 !no-underline&#34; href=&#34;#whats-the-big-idea&#34; aria-label=&#34;Anchor&#34;&gt;#&lt;/a&gt;&#xA;    &lt;/span&gt;&#xA;    &#xA;&lt;/h2&gt;&#xA;&lt;p&gt;When a machine learning algorithm builds a &lt;span class=&#34;flex cursor-pointer&#34;&gt;&#xA;  &lt;span&#xA;    class=&#34;rounded-md border border-primary-400 px-1 py-[1px] text-xs font-normal text-primary-700 dark:border-primary-600 dark:text-primary-400&#34;&gt;&#xA;    Decision Tree&#xA;  &lt;/span&gt;&#xA;&lt;/span&gt;&#xA;&#xA;, it needs to decide which question to ask first. Should it split the data by color? By size? By temperature? The answer comes from a beautifully simple concept: &lt;strong&gt;ask the question that reduces uncertainty the most&lt;/strong&gt;.&lt;/p&gt;</description>
      
    </item>
    
    <item>
      <title>How Neural Networks Learn from Mistakes: Backpropagation Explained</title>
      <link>/posts/ml-backpropagation/</link>
      <pubDate>Fri, 20 Feb 2026 00:00:00 +0000</pubDate>
      <author>nickboy@users.noreply.github.com (Nick Liu)</author>
      <guid>/posts/ml-backpropagation/</guid>
      <description>&lt;div class=&#34;lead text-neutral-500 dark:text-neutral-400 !mb-9 text-xl&#34;&gt;&#xA;  When a factory produces a defective product, how do you trace the problem back through the assembly line to find which worker made the mistake? Neural networks face the exact same challenge. They have layers of &amp;ldquo;workers&amp;rdquo; (neurons), and when the final output is wrong, they need to figure out &lt;strong&gt;who&amp;rsquo;s responsible&lt;/strong&gt; — and by how much. The algorithm that solves this is called &lt;strong&gt;backpropagation&lt;/strong&gt;, and it&amp;rsquo;s the reason deep learning works at all.&#xA;&lt;/div&gt;&#xA;&#xA;&#xA;&lt;h2 class=&#34;relative group&#34;&gt;Neural Networks Are Everywhere&#xA;    &lt;div id=&#34;neural-networks-are-everywhere&#34; class=&#34;anchor&#34;&gt;&lt;/div&gt;&#xA;    &#xA;    &lt;span&#xA;        class=&#34;absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none&#34;&gt;&#xA;        &lt;a class=&#34;text-primary-300 dark:text-neutral-700 !no-underline&#34; href=&#34;#neural-networks-are-everywhere&#34; aria-label=&#34;Anchor&#34;&gt;#&lt;/a&gt;&#xA;    &lt;/span&gt;&#xA;    &#xA;&lt;/h2&gt;&#xA;&lt;p&gt;Before we dive into how neural networks &lt;em&gt;learn&lt;/em&gt;, let&amp;rsquo;s appreciate what they do. The phone in your pocket uses neural networks for face recognition, voice transcription, photo enhancement, and text prediction. Self-driving cars, medical image analysis, language translation — all neural networks.&lt;/p&gt;</description>
      
    </item>
    
  </channel>
</rss>
