Saturday, 2 May 2026

Prompts for generating NotebookLM slide decks

https://github.com/serenakeyitan/awesome-notebookLM-prompts?tab=readme-ov-file#modern-newspaper

Enter a prompt into the description box in NotebookLM's Slide deck setting (click the right arrow on the button to open its settings).

Convert image-based slides to editable slides

 https://codia.ai/noteslide

Image-based slides e.g. NotebookLM-generated slides

Friday, 1 May 2026

Hugging Face

Hugging Face is a leading open-source AI community and platform often called the "GitHub of Machine Learning." It provides a central hub for researchers and developers to share, discover, and collaborate on AI models, datasets, and applications, with a major focus on Natural Language Processing (NLP) and generative AI.

Thursday, 30 April 2026

Is norm-referenced grading ranking or rating or both?

 Norm-referenced grading is primarily ranking, but it can involve both.

Why primarily ranking?

Norm-referenced grading compares students against each other, not against a fixed standard.
So the main purpose is to determine relative position:

  • top 10%

  • above average

  • class percentile

  • rank #5 out of 80

That is fundamentally ranking.

Where rating appears

Students still receive scores/grades (raw marks, scaled scores, GPA, letter grades), which are forms of rating. But in norm-referenced systems, those ratings are often assigned based on ranking.

Example:

  • Top 15% → A

  • Next 25% → B

  • Middle 40% → C

So the grade (rating) comes from the student's relative rank.

Short answer

Norm-referenced grading = ranking-driven rating system.

Contrast

  • Criterion-referenced grading = rating based on fixed standards (e.g., 90% = A regardless of others)

  • Norm-referenced grading = rating based on position relative to peers, i.e., ranking

If you're writing academically, I’d phrase it as:
“Norm-referenced grading is a ranking-oriented assessment approach that may produce rating outcomes such as grades.”
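
As a sketch of how a ranking-driven rating works in practice, the toy function below assigns letter grades purely by class rank, using the illustrative cutoffs from the example above (top 15% → A, next 25% → B, rest → C). The function name and cutoffs are illustrative, not any standard grading algorithm:

```python
def norm_referenced_grades(scores):
    """Assign letter grades by rank: top 15% -> A, next 25% -> B, rest -> C."""
    n = len(scores)
    # Sort student indices from highest to lowest score (the ranking step)
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    grades = [None] * n
    for rank, student in enumerate(ranked):
        fraction = (rank + 1) / n          # relative position in the class
        if fraction <= 0.15:
            grades[student] = "A"
        elif fraction <= 0.40:             # top 15% + next 25%
            grades[student] = "B"
        else:
            grades[student] = "C"
    return grades

# The same raw score can earn different grades in different cohorts,
# because the rating depends on rank, not on a fixed standard.
print(norm_referenced_grades([90, 80, 70, 60, 50, 40, 30, 20, 10, 5]))
```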

Wednesday, 29 April 2026

Randomness versus probability

We can say probability is a tool used to measure randomness.

Randomness is the phenomenon (the "what"), and probability is the mathematical measurement of the randomness (the "how much").


So if you are measuring the results of an LLM, which are unpredictable, you should use probabilistic measures over many runs, not just a three-run average and standard deviation.
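
A minimal sketch of what that looks like: run many trials and describe the whole score distribution, not just a small-sample mean. The `noisy_llm_score` function is a hypothetical stand-in for a stochastic benchmark run; in practice you would call the model itself:

```python
import random
import statistics

# Hypothetical stand-in for one stochastic LLM benchmark run (score 0-100)
def noisy_llm_score():
    return random.gauss(70, 10)

random.seed(0)
runs = [noisy_llm_score() for _ in range(1000)]   # many trials, not just 3

# Describe the distribution, not only a point estimate
mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
quartiles = statistics.quantiles(runs, n=4)       # 25th/50th/75th percentiles
print(f"mean={mean:.1f} sd={stdev:.1f} quartiles={quartiles}")
```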

Why does an LLM never give the same answer?

Because an LLM may use public RAG or a search engine (so if you ask it to generate code after a new library version is released, the generated code will differ), it samples outputs using probability and randomness, and it evolves through interaction with users. This is why an LLM is highly dynamic and evolving.
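
One mechanical source of this variability is token sampling: at each step the model draws the next token from a probability distribution rather than always taking the most likely one. A minimal sketch with toy logits (no real model involved):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token id from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs)[0]

# Toy logits for 3 candidate tokens; repeated calls give different choices,
# which is why two identical prompts can produce different answers.
rng = random.Random(42)
draws = [sample_next_token([2.0, 1.0, 0.5], rng=rng) for _ in range(10)]
print(draws)
```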

Token meaning and pricing in LLM

 A token is the basic unit of data that Gemini models use for input and output. For text, one token is about 4 characters or 0.75 words. For other media, it represents a fixed "slice" of information, such as a patch of pixels or a fraction of a second.

Token Meanings by Media Type
All media is converted into tokens to fit within the model's 1M to 2M token context window.
  • Text: Approximately 4 characters per token. Standard English text averages about 750 words per 1,000 tokens.
  • Images: Small images (≤384px) count as 258 tokens. Larger images are divided into 768x768 pixel blocks, each costing 258 tokens.
  • Video: Typically converted at 263 tokens per second.
  • Audio: Typically converted at 32 tokens per second.
  • Reasoning: Models like Gemini 3.1 Pro generate internal "thinking tokens" during complex tasks, which are billed as output tokens.
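
The rules of thumb above can be turned into a rough token estimator. The constants are the ones quoted above; real tokenizers and media pipelines will differ, so treat this as a back-of-the-envelope sketch:

```python
import math

CHARS_PER_TEXT_TOKEN = 4        # ~4 characters per text token
TOKENS_PER_IMAGE_TILE = 258     # per small image or per 768x768 block
TOKENS_PER_VIDEO_SECOND = 263
TOKENS_PER_AUDIO_SECOND = 32

def estimate_tokens(text_chars=0, image_tiles=0, video_seconds=0.0, audio_seconds=0.0):
    """Very rough token estimate using the rules of thumb above."""
    return (math.ceil(text_chars / CHARS_PER_TEXT_TOKEN)
            + image_tiles * TOKENS_PER_IMAGE_TILE
            + round(video_seconds * TOKENS_PER_VIDEO_SECOND)
            + round(audio_seconds * TOKENS_PER_AUDIO_SECOND))

# e.g. a 4,000-character prompt plus one small image and 10 s of audio:
print(estimate_tokens(text_chars=4000, image_tiles=1, audio_seconds=10))
# 1000 + 258 + 320 = 1578 tokens
```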

Token Pricing (Per 1 Million Tokens)
Pricing is pay-as-you-go, with different rates for input (data read by the model) and output (data generated by the model). Rates often double if the total context exceeds 200,000 tokens.
Gemini Model Tier    Input (≤200k)   Output (≤200k)   Input (>200k)   Output (>200k)
3.1 Pro (Preview)    $2.00           $12.00           $4.00           $18.00
2.5 Pro              $1.25           $10.00           $2.50           $15.00
2.5 Flash            $0.30           $2.50            N/A*            N/A*
2.5 Flash-Lite       $0.10           $0.40            N/A*            N/A*
*Flash models typically have flat pricing regardless of context length up to their limit.
Cost Reduction Features
  • Context Caching: Reusing large datasets (e.g., long PDFs) can reduce input costs by 90%, with rates as low as $0.01 to $0.20 per 1M tokens.
  • Batch API: Submitting non-urgent tasks for asynchronous processing provides a 50% discount on standard paid rates.
  • Free Tier: Available through Google AI Studio for development, typically capped at 1,500 requests per day.
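
As a worked example, here is a hedged sketch of a cost calculator using the 3.1 Pro (Preview) rates quoted above. The exact trigger for the higher rate (whether it is input context alone or total tokens) is an assumption here; the function name and defaults are illustrative:

```python
def gemini_cost(input_tokens, output_tokens,
                in_rate_low=2.00, out_rate_low=12.00,
                in_rate_high=4.00, out_rate_high=18.00,
                context_threshold=200_000, batch=False):
    """Estimate cost in USD from per-1M-token rates (defaults: 3.1 Pro Preview).
    Assumes the higher rates apply when total tokens exceed the threshold,
    and that the Batch API gives a flat 50% discount."""
    over = (input_tokens + output_tokens) > context_threshold
    in_rate = in_rate_high if over else in_rate_low
    out_rate = out_rate_high if over else out_rate_low
    cost = (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
    return cost * 0.5 if batch else cost

# 100k input + 10k output, under the 200k threshold:
print(gemini_cost(100_000, 10_000))              # 0.20 + 0.12 = 0.32 USD
print(gemini_cost(100_000, 10_000, batch=True))  # 0.16 USD with the batch discount
```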

CodeQL

A powerful, open-source semantic code analysis engine used to find security vulnerabilities and bugs by querying code as data. It treats source code as a relational database, allowing developers to write custom queries in a logic-based language (QL) to identify complex patterns, security flaws (SAST), and variants of known bugs.

Key Aspects of CodeQL:
  • How it Works: CodeQL converts code into a searchable database containing relational data, abstract syntax trees, and control flow.
  • Usage Modes: It can be used via the CodeQL CLI for CI/CD pipelines, integrated directly into GitHub Actions for automatic scanning, or through the CodeQL extension for VS Code.
  • Languages Supported: Supports major languages including C/C++, C#, Java/Kotlin, JavaScript/TypeScript, Python, Go, Ruby, and Swift.
  • Variant Analysis: More than 400 CVEs have been identified using CodeQL, making it highly effective for finding similar vulnerabilities across large codebases.

  • Example (a CodeQL query that finds empty code blocks in JavaScript):

    import javascript

    from BlockStmt b
    where b.getNumStmt() = 0
    select b, "This is an empty code block."

    • import javascript: Loads the standard CodeQL library for JavaScript.
    • from BlockStmt b: Defines a variable b that represents any "Block Statement" (code inside curly braces).
    • where b.getNumStmt() = 0: Filters for blocks where the number of statements is exactly zero.
    • select b, "...": Outputs the location of the empty block along with a descriptive message.

Tuesday, 28 April 2026

Vibe coding IDEs

Cursor is a vibe-coding IDE. It is a VS Code fork integrated with LLMs, allowing users to build applications through conversation.

Vibe coding is a software development approach where users, often with little coding experience, prompt AI tools to generate, debug, and refine applications using natural language.

Google AI Studio (prompt-to-app)

The best co-op (cooperative education) placements

  1. An organization you will work for after graduation
  2. A well-known organization, such as a large company or a government agency
  3. An organization that lets you join well-known projects, such as projects on national infrastructure

Hypermarket

A massive retail facility combining a supermarket and a department store under one roof.

Product Mix: Combines full grocery lines with general merchandise (furniture, electronics, clothing).

Sunday, 26 April 2026

Vibe Coding To Agentic coding

 Vibe Coding (i.e., using prompts to instruct an AI to write code)

This is the "buzziest" term of 2026. Coined by Andrej Karpathy, vibe coding refers to a high-level approach where the programmer focuses on the "vibe" (the intent and desired outcome) rather than the syntax.  

The Workflow: You describe the goal in natural language, and the AI handles the implementation.  

The Philosophy: It treats English (or your preferred natural language) as the "hottest new programming language."  

2. Agentic Coding

When you move beyond simple snippets to full-scale development, it's called agentic coding. This involves using AI Agents that don't just write code but also:  

• Plan the architecture.  

• Execute the code in a sandbox.

• Debug errors by reading stack traces.

• Iteratively improve the output until it passes unit tests.  
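
The agentic loop described above can be caricatured in a few lines. Everything here is a toy stand-in: a real agent would call an LLM to generate the code and a sandboxed runner to execute the tests:

```python
def toy_agent(generate, run_tests, max_iterations=5):
    """Toy agentic-coding loop: generate code, test it, feed failures back."""
    feedback = None
    for attempt in range(1, max_iterations + 1):
        code = generate(feedback)           # "plan + write code" step
        passed, feedback = run_tests(code)  # "execute in sandbox" step
        if passed:
            return code, attempt            # stop once the tests pass
    return None, max_iterations

# Stand-ins: the generator only gets it right after seeing feedback once.
def fake_generate(feedback):
    return "fixed" if feedback else "buggy"

def fake_run_tests(code):
    return (code == "fixed", None if code == "fixed" else "AssertionError on line 3")

print(toy_agent(fake_generate, fake_run_tests))   # ('fixed', 2)
```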


Friday, 24 April 2026

RAG vs Agentic RAG

[Diagram not captured in these notes: the RAG and Agentic RAG pipelines; the numbered steps below refer to it.]

Explain RAG step 5:

In this step, the system takes two distinct inputs and merges them into a single coherent structure:

The Original Query: The initial question or prompt provided in Step 1.

The Retrieved Context: The specific "Similar docs" or text chunks found during the Similarity Search in Step 4. These are the external facts that the LLM was not originally trained on (such as private documents or real-time news).

Why this step matters

The goal of Step 5 is to construct a Rich Prompt. Instead of just asking the LLM a question blindly, the system essentially says:

"Using the following reference material: [Retrieved Context], please answer this question: [Query]."
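
Step 5 can be sketched as a simple string-assembly function. This is a minimal illustration of the rich-prompt idea, not any particular framework's API:

```python
def build_rich_prompt(query, retrieved_chunks):
    """Step 5 of the RAG pipeline: merge the original query with the
    retrieved context into a single prompt for the LLM."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Using the following reference material:\n"
        f"{context}\n\n"
        f"Please answer this question: {query}"
    )

prompt = build_rich_prompt(
    "What is our refund policy?",
    ["Refunds are issued within 30 days of purchase.",
     "Digital goods are non-refundable."],
)
print(prompt)
```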



Tuesday, 21 April 2026

Asymptotic analysis

Asymptotic analysis: a method of finding a simple function g(n) to stand in for a complicated function f(n), in order to study its growth when n becomes very large.

Big-O is an Asymptotic Notation for the worst case 
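
The standard formal definition ties the two functions together with constants c and n₀ that are independent of n:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0
\ \text{such that}\ 0 \le f(n) \le c \cdot g(n)
\quad \text{for all } n \ge n_0
```

Here g(n) is the simple bounding function from the definition above; Big-O only constrains growth beyond the threshold n₀.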

https://www.engati.ai/glossary/asymptotic-notation

Tuesday, 7 April 2026

Middleware in modern software architecture

 The term middleware in modern software architecture also commonly refers to an authentication or authorization layer that intercepts requests before they reach the application logic.
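
A minimal sketch of that idea, framework-free: a wrapper that rejects requests lacking valid credentials and otherwise passes them through to the handler. The request shape and the token check are purely illustrative:

```python
def require_auth(handler):
    """Toy auth middleware: wraps a request handler and rejects requests
    that lack a valid Authorization header (hypothetical token check)."""
    def wrapped(request):
        token = request.get("headers", {}).get("Authorization")
        if token != "Bearer secret-token":       # hypothetical valid token
            return {"status": 401, "body": "Unauthorized"}
        return handler(request)                  # authenticated: pass through
    return wrapped

@require_auth
def profile(request):
    return {"status": 200, "body": "profile data"}

print(profile({"headers": {}}))                                        # 401
print(profile({"headers": {"Authorization": "Bearer secret-token"}}))  # 200
```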

Free remote desktop application

 https://anydesk.com/

Monday, 6 April 2026

Typescript & Backend Typescript framework

 Backend Typescript framework  https://elysiajs.com/


Typescript:

TypeScript is a "superset" of JavaScript that adds static typing.

Think of it as JavaScript with a built-in spellchecker for your logic. While standard JavaScript lets you be flexible (sometimes to a fault), TypeScript forces you to define your data structures upfront, catching bugs before you even run your code.

The Essentials

  • The Workflow: You write .ts files, the compiler checks for errors and then converts them into clean .js for the browser to run.

  • The Main Benefit: It prevents the dreaded "TypeErrors" (like trying to multiply a number by a string) that commonly crash apps.

  • The Catch: It requires an extra build step and a bit more initial setup than plain JavaScript.