Thursday, September 18, 2025

Evaluate RAG

https://towardsdatascience.com/evaluating-your-rag-solution/

NotebookLM is an example of RAG: you add files, and users can then ask anything about the contents of those files.

Saturday, September 13, 2025

PR-AUC

You can use this plot to make an educated decision when it comes to the classic precision/recall dilemma. Obviously, the higher the recall, the lower the precision. Knowing at which recall your precision starts to fall fast can help you choose the threshold and deliver a better model.

https://neptune.ai/blog/f1-score-accuracy-roc-auc-pr-auc

The precision-recall curve is a handy plot to showcase the relationship and trade-off between precision and recall as we adjust the decision threshold of the classifier.

What is the decision threshold? The decision threshold, also called the classification threshold, is a cutoff point used in binary classification to convert the probability score output by a machine learning model into a final class prediction (positive or negative). Most binary classification models (like logistic regression) output a probability between 0 and 1 that an instance belongs to the positive class. The decision threshold determines which probability values map to which class: if the predicted probability is greater than or equal to the threshold, the instance is classified as positive; if it is less than the threshold, the instance is classified as negative.

How it works: by default, the threshold is often set to 0.5.

  • A probability ≥ 0.5 → positive class
  • A probability < 0.5 → negative class

However, this default isn't always optimal. The threshold is a hyperparameter that can be tuned to balance the trade-off between precision and recall, which is what the precision-recall curve helps to visualize. Adjusting the decision threshold directly changes the number of false positives (FP) and false negatives (FN), which in turn changes the precision and recall values.
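The effect of the threshold can be sketched in a few lines of Python (a minimal illustration; the scores and labels below are made-up values, not from any real model):

```python
# Sketch: how the decision threshold trades precision against recall.
# Illustrative (made-up) model scores and true labels.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   1,   0,   0,   0]

def precision_recall(threshold):
    """Precision and recall when scores >= threshold are predicted positive."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall(0.5))   # stricter threshold: (0.75, 0.75)
print(precision_recall(0.25))  # looser threshold: recall rises, precision falls
```

Sweeping the threshold from high to low and plotting the resulting (recall, precision) pairs is exactly what produces the precision-recall curve.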

A higher AUC-PR value signifies better performance, with a maximum value of 1 indicating perfect precision and recall trade-off. https://www.superannotate.com/blog/mean-average-precision-and-its-uses-in-object-detection
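The AUC-PR described above can be estimated as "average precision": sweep the threshold down one ranked score at a time and sum precision weighted by the change in recall. A minimal pure-Python sketch (the scores and labels are made up for illustration):

```python
# Sketch: average precision, AP = sum over steps of (R_n - R_{n-1}) * P_n.
def average_precision(scores, labels):
    """AP for binary labels, given classifier scores (higher = more positive)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    ap = prev_recall = 0.0
    for i in order:  # lower the threshold past one example at a time
        if labels[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

# Perfect ranking (all positives scored above all negatives) gives AP = 1.0:
print(average_precision([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```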


Tuesday, September 9, 2025

Agentic AI vs AI Agent

The primary difference is that AI Agents are individual tools that execute pre-defined tasks with limited autonomy, while Agentic AI is a broader concept representing the use of autonomous systems that can independently set goals, make real-time decisions, adapt, and collaborate to solve complex, dynamic problems. Think of AI agents as specific tools or employees, and agentic AI as the system or project manager coordinating them to achieve a larger, more complex goal.  

Stochastic Gradient Descent

  • Gradient Descent (Batch): You take a step in the steepest downhill direction. To find that direction, you have to survey the slope of the entire landscape (the entire dataset) before taking each single step. This is accurate but very slow if the landscape is vast (a huge dataset).
  • Stochastic Gradient Descent (SGD): Instead of surveying the entire landscape, you just pick one random spot on the landscape and measure the slope there. You then take a small step in that single spot's steepest downhill direction. You repeat this process many times, picking a new random spot for each step.
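The two procedures above can be sketched in Python on a toy one-parameter least-squares problem (a minimal illustration; the data, learning rate, and step counts are arbitrary choices):

```python
import random

# Toy data: y = 2*x exactly, so both methods should find w close to 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

def batch_gd(w, lr=0.02, steps=200):
    """Each step uses the gradient averaged over the ENTIRE dataset."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def sgd(w, lr=0.02, steps=200, seed=0):
    """Each step uses the gradient at ONE randomly chosen example."""
    rng = random.Random(seed)
    for _ in range(steps):
        i = rng.randrange(len(xs))
        grad = 2 * (w * xs[i] - ys[i]) * xs[i]
        w -= lr * grad
    return w

print(batch_gd(0.0))  # close to 2.0, via smooth steps
print(sgd(0.0))       # also close to 2.0, via noisier per-example steps
```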

Sunday, September 7, 2025

Docker getting started for Windows Desktop (not Windows Server)

Terminology (https://www.docker.com/blog/docker-for-web-developers/)

  • Docker Hub: The world’s largest repository of container images, which helps developers and open source contributors find, use, and share their container images.
  • Docker Compose: A tool for defining and running multi-container applications.
  • Docker Engine: An open source containerization technology for building and containerizing applications.
  • Docker Desktop: Includes the Docker Engine and other open source components; proprietary components; and features such as an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, and security features that support Enhanced Container Isolation (ECI), air-gapped containers, and administrative settings management.
  • Docker Build Cloud: A Docker service that lets developers build their container images on a cloud infrastructure that ensures fast builds anywhere for all team members. 
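As a small illustration of Docker Compose (a hypothetical sketch; the service name, ports, and restart policy are arbitrary choices, not from the Docker docs), the single-container Nginx experiment below could be declared in a compose.yaml and started with `docker compose up -d`:

```yaml
# compose.yaml (hypothetical sketch): the Nginx example as a Compose service
services:
  web:
    build: .              # build the image from the Dockerfile in this folder
    ports:
      - "8080:80"         # map host port 8080 to container port 80
    restart: unless-stopped
```

With Compose, the build and run flags from the manual `docker build`/`docker run` steps live in one declarative file, which pays off once an app grows to multiple containers.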

My successful experiment.

1.Download Docker desktop for Windows

https://www.docker.com/products/docker-desktop/

2.Install it

3.You may sign up or skip

4.Run Docker Desktop. Its icon appears in the system tray. 

5.You may be asked to run the command wsl --update in cmd to update the Windows Subsystem for Linux (WSL), then click Restart to restart the Docker engine.

6.Create a folder named "getting-started-docker" anywhere.

7.Within the created folder, create two files to run an Nginx-based HTTP server on your Windows machine.

Dockerfile

# Use the official Nginx image from Docker Hub

FROM nginx:latest

# Copy the custom index.html file into the Nginx directory

COPY index.html /usr/share/nginx/html/index.html

#If there are multiple files of a web app you want to deploy, use one of these commands instead:

#COPY . /usr/share/nginx/html/ to copy entire current directory

#COPY *.js /usr/share/nginx/html/ to copy all js files

#COPY index.html styles.css scripts/app.js /usr/share/nginx/html/ to copy specified files

index.html

<!DOCTYPE html>

<html>

<head>

<title>Hello, World!</title>

</head>

<body>

<h1>Hello, World!</h1>

<p>This page is served by Nginx in a Docker container.</p>

</body>

</html>

8.Open cmd window. Change directory into getting-started-docker

9.Run the following command to build your container image, which extends the nginx image from Docker Hub by adding my index.html. The -t flag gives the image a name (tag). The . specifies the current directory as the build context.

docker build -t my-nginx-webserver .

10.Start my container by running my docker image.

docker run -d -p 8080:80 --name my-nginx-container my-nginx-webserver

The -d flag runs the container in "detached" mode (in the background).

The -p flag maps port 8080 on your local machine to port 80 inside the container, which is the default port Nginx listens on.

The --name my-nginx-container gives your container a memorable name.

Finally, you specify the name of the image you want to use (my-nginx-webserver).

If you want your container to auto start upon Windows restart, do 2 more steps:

1) Docker Desktop > Settings > tick "Start Docker Desktop when you sign in to your computer"

2) Run docker from within getting-started-docker folder by using this command instead: 

docker run --restart unless-stopped -d -p 8080:80 --name my-nginx-container my-nginx-webserver

11.Open http://localhost:8080 in your web browser to verify if it works.

12.Stop the container with this command:

docker stop my-nginx-container

13.Or you may start it again with this command:

docker start my-nginx-container

14.Remove your container after stopping it:

docker rm my-nginx-container

15.You may remove the image:

docker rmi my-nginx-webserver

Sunday, August 31, 2025

Model overfit

 Overfitting is a common problem in machine learning where a model learns the training data too well, including its noise and random fluctuations, to the point that it fails to make accurate predictions on new, unseen data. It's like a student who memorizes test answers without understanding the underlying concepts; they do well on the practice test (training data) but struggle on the real exam (new data). 🧠

An overfit model has high variance and low bias, meaning it is highly sensitive to the training data and performs poorly when given new information. This contrasts with an underfit model, which is too simple to capture the underlying patterns and performs poorly on both training and new data.

How to Detect and Prevent Overfitting

Detecting overfitting often involves monitoring the model's performance on both a training dataset and a separate validation dataset. A key indicator is when the model's performance on the training data continues to improve (e.g., a decrease in error) while its performance on the validation data begins to worsen.

Here are some common strategies to prevent overfitting:

Use More Data: One of the most effective ways to prevent overfitting is to increase the amount of training data. A larger, more diverse dataset helps the model learn the true patterns rather than memorizing random noise.

Simplify the Model: If a model is too complex for the given data, it's more likely to overfit. You can reduce complexity by using a simpler algorithm or by reducing the number of parameters or features.

Regularization: This technique adds a penalty to the model's loss function based on its complexity. This discourages the model from assigning too much importance to specific features and helps prevent it from becoming overly complex. E.g. L1 Regularization (Lasso)

Early Stopping: During the training process, you can monitor the model's performance on the validation set. If the validation error starts to increase, you can stop the training process early to prevent overfitting.

Cross-Validation: This method involves splitting the data into multiple subsets, or "folds." The model is trained and tested on different combinations of these folds, which helps ensure it's not performing well on just one specific data split.

Dropout: Primarily used in neural networks, dropout is a different kind of regularization. During each training iteration, it randomly "drops" a percentage of neurons by temporarily ignoring them. This prevents neurons from becoming too co-dependent and forces the network to learn more robust and generalizable patterns.
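The early-stopping rule described above can be sketched as a small helper (a hypothetical sketch; the patience value and the validation-loss history are made up for illustration):

```python
# Sketch of early stopping: stop when validation loss stops improving.
def train_with_early_stopping(val_losses, patience=2):
    """Return the best epoch, stopping once the validation loss has failed
    to improve for `patience` consecutive epochs."""
    best = float("inf")
    best_epoch = 0
    waited = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break  # validation loss worsened for `patience` epochs in a row
    return best_epoch

# Validation loss falls, then rises as the model starts to overfit:
history = [1.0, 0.6, 0.4, 0.35, 0.38, 0.45, 0.60]
print(train_with_early_stopping(history))  # 3: the epoch with the lowest loss
```

In a real training loop, the model weights saved at the returned epoch are the ones kept.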

--Gemini 

Saturday, August 23, 2025

Stat vs Math

Math is about discovering and proving truths that are universally valid.

Stat is about drawing conclusions from data, often with uncertainty.

Aspect          | Mathematics                                 | Statistics
Focus           | Abstract concepts, patterns, structures     | Data collection, analysis, interpretation
Nature          | Deductive reasoning (from theory to result) | Inductive reasoning (from data to inference)
Purpose         | To develop theories and solve equations     | To make decisions or predictions based on data
Core Activities | Proving theorems, solving equations         | Estimating, testing hypotheses, modeling data
Key Topics      | Algebra, calculus, geometry, number theory  | Probability, sampling, regression, inference

Tuesday, August 19, 2025

Saturday, August 16, 2025

Wednesday, August 6, 2025

Second-person terms of address for police and army personnel

Army ranks

  • Non-commissioned officers (นายสิบ/ชั้นประทวน):

    • สิบตรี, สิบโท, สิบเอก: commonly addressed as "หมู่"

    • จ่าสิบตรี, จ่าสิบโท, จ่าสิบเอก: commonly addressed as "จ่า"

    • จ่าสิบเอก (at the higher pay grade): addressed as "จ่าพิเศษ"

  • Commissioned officers (นายทหารสัญญาบัตร):

    • ร้อยตรี, ร้อยโท: addressed as "ผู้หมวด"

    • ร้อยเอก: addressed as "ผู้กอง"

    • พันตรี, พันโท, พันเอก: addressed as "ผู้พัน"

    • พันเอกพิเศษ, พลตรี, พลโท, พลเอก: addressed as "นายพล"

    • From regimental commander upward: commonly addressed as "ผู้การ"

Police ranks

  • Non-commissioned officers (ชั้นประทวน):

    • สิบตำรวจตรี, สิบตำรวจโท, สิบตำรวจเอก: addressed as "หมู่"

    • จ่าสิบตำรวจ: addressed as "จ่า"

    • ดาบตำรวจ: addressed as "ดาบ"

  • Commissioned officers (ชั้นสัญญาบัตร):

    • ร้อยตำรวจตรี, ร้อยตำรวจโท: addressed as "ผู้หมวด"

    • ร้อยตำรวจเอก: addressed as "ผู้กอง"

    • พันตำรวจตรี, พันตำรวจโท, พันตำรวจเอก: addressed as "ผู้พัน"

    • พลตำรวจตรี, พลตำรวจโท, พลตำรวจเอก: addressed as "นายพล"

    • From commander of a กองบังคับการ upward (i.e. พล.ต.ต. and above): commonly addressed as "ผู้การ"

    • สารวัตร (inspector) is a position title, not a rank; most inspectors hold the rank of พันตำรวจตรี

Tuesday, August 5, 2025

Web service runs on application server

 A RESTful web service is a type of web application, and its core business logic, where the "work" of the service is performed, runs on an application server.

Here's why and how it fits into the web server and application server model:

 * RESTful Services and Dynamic Content: A RESTful web service is designed to provide dynamic data, often in formats like JSON or XML, in response to requests. This is the very definition of dynamic content, which is the application server's main purpose. A static HTML file doesn't need to be generated—it's just a file. But a request for a RESTful endpoint like /users/123 needs to trigger code that queries a database, formats the user's data, and returns it as a JSON object. This logic runs on the application server.

 * The Web Server's Supporting Role: While the RESTful service code runs on the application server, a web server is still typically used in front of it. In this scenario, the web server's job is not to serve the RESTful data directly. Instead, it acts as a smart proxy:

   * It handles the incoming HTTP requests from clients.

   * It forwards requests for the RESTful endpoints to the application server.

   * It can also perform tasks like load balancing (distributing requests across multiple application servers) and SSL termination (handling the encrypted connection so the application server doesn't have to).

 * Example:

   * A client sends a GET request to https://api.example.com/users/123.

   * This request first hits a web server (e.g., NGINX).

   * The web server is configured to recognize that requests to the /users path should be routed to a specific application server.

   * The application server (e.g., a Node.js server or a Java servlet container like Tomcat) receives the request.

   * It executes the code for the users endpoint, which likely performs a database query to find the user with ID 123.

   * The application server then formats the data into a JSON response.

   * It sends the JSON response back to the web server.

   * The web server sends the final JSON response back to the client.
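The routing step above can be sketched as an NGINX configuration fragment (a hypothetical sketch: the upstream address, port, and server name are assumptions for illustration, not from the original example):

```nginx
# Hypothetical sketch: NGINX as a reverse proxy in front of an app server
server {
    listen 443 ssl;
    server_name api.example.com;
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    location /users {
        # Forward REST requests to the application server (e.g. Node.js on port 3000)
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```

This is also where load balancing would be added, by declaring an `upstream` block with several application servers and pointing `proxy_pass` at it.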

In summary, the RESTful web service itself—the code that defines the API, handles requests, and provides responses—is executed on the application server. The web server serves as a crucial component of the overall infrastructure, providing an efficient and secure gateway to the application server's functionality.

Thursday, July 31, 2025

Wednesday, July 16, 2025

Bortle levels for broadband filters

 Broadband filters, often referred to as "light pollution reduction" (LPR) or "CLS" (City Light Suppression) filters, are designed to block specific wavelengths of light commonly associated with artificial light sources (like sodium vapor and mercury vapor lamps) while allowing most of the visible spectrum, including the light from broadband celestial objects, to pass through.

Here's a breakdown of their suitability across the Bortle Scale:

Bortle Scale 1-4 (Dark to Rural/Suburban Transition):

 * Generally NOT recommended or necessary. If you're lucky enough to be in truly dark or moderately dark skies, a broadband filter will often do more harm than good.

   * Loss of Signal: Broadband filters block some light from the desired celestial object, as they are cutting out parts of the spectrum. In dark skies, the benefit of light pollution reduction is minimal, and the loss of natural light from your target can actually reduce the overall signal-to-noise ratio.

   * Color Shift: They can introduce a slight color cast, making color calibration more challenging.

   * Dimming: They will dim the overall view, which is counterproductive in dark skies where you want to gather as much light as possible.

 * Exception: Some astrophotographers might use a mild broadband filter (like an Optolong L-Pro) in Bortle 4 or even 3 if they are trying to specifically combat residual light pollution from a distant city glow on the horizon, or to slightly enhance contrast on some objects. However, for most broadband targets, no filter is often the best choice in these conditions.

Bortle Scale 5-7 (Suburban to Urban Transition):

 * Where they are most effective and commonly used. This is the "sweet spot" for broadband filters.

   * Light Pollution Reduction: In these areas, there's a significant amount of light pollution from various sources. Broadband filters help to filter out the common culprits (sodium, mercury vapor) making a noticeable difference in reducing sky glow and improving contrast for broadband targets.

   * Suitable for Galaxies and Star Clusters: These filters allow enough of the broad spectrum light from galaxies, star clusters, and reflection nebulae to pass through, making them viable targets even from these moderately light-polluted locations.

   * More Natural Colors: Compared to narrowband or dual-band filters, broadband filters generally allow for more natural-looking star colors.

 * Examples: Many popular broadband filters like the Optolong L-Pro, Astronomik CLS, or similar are designed for these conditions.

Bortle Scale 8-9 (City Sky to Inner-City Sky):

 * Limited effectiveness, often less beneficial than dual-band/narrowband filters.

   * Newer LED Light Pollution: Modern LED streetlights emit a much broader spectrum of light, which broadband filters struggle to block effectively without also blocking significant amounts of desired light from your celestial target. This makes them less effective against contemporary light pollution.

   * Overwhelmed Signal: In extremely light-polluted areas, the sky glow can be so intense that even a broadband filter can't sufficiently reduce it to make fainter broadband targets (like galaxies) stand out. The signal from these objects is simply too overwhelmed by the background.

   * Better for Specific Targets: For Bortle 8-9, if you want to image, your best bet for most objects is to focus on emission nebulae using dual-band or narrowband filters. These filters are far more aggressive at isolating specific wavelengths, allowing you to cut through extreme light pollution to capture objects that emit light predominantly in those narrow bands (like H-alpha and O-III).

   * Processing is Key: Even with a broadband filter in these conditions, significant post-processing (gradient removal, noise reduction) will be essential to salvage an image. Some astrophotographers even argue that for galaxies in Bortle 8-9, it's sometimes better to shoot without a filter and rely solely on aggressive processing tools like PixInsight's DynamicBackgroundExtraction or Siril's background extraction, as a filter might remove too much valuable signal.

In summary:

 * Bortle 1-4: Generally no filter is best for broadband targets.

 * Bortle 5-7: Broadband filters are highly recommended and effective for imaging broadband targets (galaxies, star clusters, reflection nebulae) and for general light pollution reduction.

 * Bortle 8-9: Broadband filters have limited effectiveness, especially against modern LED light pollution. Dual-band or narrowband filters are usually preferred for imaging emission nebulae from these locations, while broadband targets remain extremely challenging.

If you are in Bangkok (likely Bortle 8 or 9 in most areas), a broadband filter might offer a slight improvement for some targets, but you will likely find dual-band or narrowband filters for nebulae to be much more impactful for imaging. For galaxies, battling the light pollution will be a significant challenge regardless of the filter, and often relies heavily on integration time and advanced post-processing.

Order of spending one's wealth

 Use wealth to support oneself, one's mother and father, one's children and wife, one's servants and dependents, and the Sangha, in that order.

Saturday, July 12, 2025

Open LLM models

Llama

https://www.llama.com/

Mistral 

https://mistral.ai/models

Etc:

Model | Params | Creator | License | Notes
LLaMA 2 | 7B / 13B / 70B | Meta | Custom (non-commercial for some) | Powerful, widely used. Available via Hugging Face.
LLaMA 3 | 8B / 70B | Meta | Custom (open-weight) | Newer, more capable than LLaMA 2. May have commercial restrictions.
Mistral 7B | 7B | Mistral AI | Apache 2.0 | Fast, strong performance. Supports multi-query attention.
Mixtral 8x7B | ~12.9B active | Mistral AI | Apache 2.0 | Sparse MoE (uses 2 of 8 experts). High performance.
Phi-2 / Phi-3 | 2.7B / 3.8B+ | Microsoft | MIT / Open | Small but very efficient. Good for on-device.
Gemma 2 | 2B / 7B | Google DeepMind | Apache 2.0 | Lightweight, efficient, for research & commercial use.
Command R / R+ | 7B+ | Cohere | Apache 2.0 | Fine-tuned for RAG (retrieval-augmented generation).
OpenHermes 2.5 / 2.5-Mistral | 7B | Teknium | Open (depends on base) | Popular open-instruct models built on Mistral.
Yi-34B | 34B | 01.AI | Open (restrictions may apply) | High-performance model from China.
Dolphin 2.7 | 7B | Cognitive Computations | Open | Strong performance; instruction-tuned.
StableLM Zephyr | 3B / 7B | Stability AI | Open | Aligned with RLHF, chat-tuned.
Pythia | 70M–12B | EleutherAI | Apache 2.0 | Designed for transparency & research.
RedPajama | 3B / 7B | Together / Hazy Research | Apache 2.0 | Full-stack dataset + model project.
Falcon | 7B / 40B | TII (UAE) | Apache 2.0 (7B), custom (40B) | Early open model; still useful.

Where to Use or Download Them

Hugging Face – Most are hosted here with easy-to-use APIs.

Ollama – Run models locally with one command (supports LLaMA 2/3, Mistral, etc.).

LMStudio – GUI for running open LLMs locally on Mac/Windows.

Replicate – Run open models via web APIs.

GPT4All – Desktop apps and models optimized for offline use.

---

🧠 Tips When Choosing a Model

Use Mistral 7B or Mixtral 8x7B for high-quality, efficient chat or RAG apps.


Use Phi-3 or Gemma 2B for on-device or low-resource environments.


Use LLaMA 3 (8B or 70B) if you want Meta’s best open-weight models for research.

List of GenAI

chatgpt.com

https://claude.ai


https://gemini.google.com/app

grok.com

https://chat.deepseek.com

https://www.meta.ai/

https://www.perplexity.ai/


storm.genie.stanford.edu (Create article)

https://cursor.com/ (AI code editor)

https://www.airtable.com/


The list above is grouped by the similarity of each model's answer to the question "Does gradient descent have its own algorithm, independent of the descent-direction iteration and line-search algorithms?"