Friday, July 18, 2025
Wednesday, July 16, 2025
Bortle levels for broadband filters
Broadband filters, often referred to as "light pollution reduction" (LPR) or "CLS" (City Light Suppression) filters, are designed to block specific wavelengths of light commonly associated with artificial light sources (like sodium vapor and mercury vapor lamps) while allowing most of the visible spectrum, including the light from broadband celestial objects, to pass through.
Here's a breakdown of their suitability across the Bortle Scale:
Bortle Scale 1-4 (Dark to Rural/Suburban Transition):
* Generally NOT recommended or necessary. If you're lucky enough to be in truly dark or moderately dark skies, a broadband filter will often do more harm than good.
* Loss of Signal: Broadband filters block some light from the desired celestial object, as they are cutting out parts of the spectrum. In dark skies, the benefit of light pollution reduction is minimal, and the loss of natural light from your target can actually reduce the overall signal-to-noise ratio.
* Color Shift: They can introduce a slight color cast, making color calibration more challenging.
* Dimming: They will dim the overall view, which is counterproductive in dark skies where you want to gather as much light as possible.
* Exception: Some astrophotographers might use a mild broadband filter (like an Optolong L-Pro) in Bortle 4 or even 3 if they are trying to specifically combat residual light pollution from a distant city glow on the horizon, or to slightly enhance contrast on some objects. However, for most broadband targets, no filter is often the best choice in these conditions.
Bortle Scale 5-7 (Suburban to Urban Transition):
* Where they are most effective and commonly used. This is the "sweet spot" for broadband filters.
* Light Pollution Reduction: In these areas, there's a significant amount of light pollution from various sources. Broadband filters help to filter out the common culprits (sodium, mercury vapor) making a noticeable difference in reducing sky glow and improving contrast for broadband targets.
* Suitable for Galaxies and Star Clusters: These filters allow enough of the broad spectrum light from galaxies, star clusters, and reflection nebulae to pass through, making them viable targets even from these moderately light-polluted locations.
* More Natural Colors: Compared to narrowband or dual-band filters, broadband filters generally allow for more natural-looking star colors.
* Examples: Many popular broadband filters like the Optolong L-Pro, Astronomik CLS, or similar are designed for these conditions.
Bortle Scale 8-9 (City Sky to Inner-City Sky):
* Limited effectiveness, often less beneficial than dual-band/narrowband filters.
* Newer LED Light Pollution: Modern LED streetlights emit a much broader spectrum of light, which broadband filters struggle to block effectively without also blocking significant amounts of desired light from your celestial target. This makes them less effective against contemporary light pollution.
* Overwhelmed Signal: In extremely light-polluted areas, the sky glow can be so intense that even a broadband filter can't sufficiently reduce it to make fainter broadband targets (like galaxies) stand out. The signal from these objects is simply too overwhelmed by the background.
* Better for Specific Targets: For Bortle 8-9, if you want to image, your best bet for most objects is to focus on emission nebulae using dual-band or narrowband filters. These filters are far more aggressive at isolating specific wavelengths, allowing you to cut through extreme light pollution to capture objects that emit light predominantly in those narrow bands (like H-alpha and O-III).
* Processing is Key: Even with a broadband filter in these conditions, significant post-processing (gradient removal, noise reduction) will be essential to salvage an image. Some astrophotographers even argue that for galaxies in Bortle 8-9, it's sometimes better to shoot without a filter and rely solely on aggressive processing tools like PixInsight's DynamicBackgroundExtraction or Siril's background extraction, as a filter might remove too much valuable signal.
In summary:
* Bortle 1-4: Generally no filter is best for broadband targets.
* Bortle 5-7: Broadband filters are highly recommended and effective for imaging broadband targets (galaxies, star clusters, reflection nebulae) and for general light pollution reduction.
* Bortle 8-9: Broadband filters have limited effectiveness, especially against modern LED light pollution. Dual-band or narrowband filters are usually preferred for imaging emission nebulae from these locations, while broadband targets remain extremely challenging.
If you are in Bangkok (likely Bortle 8 or 9 in most areas), a broadband filter might offer a slight improvement for some targets, but you will likely find dual-band or narrowband filters for nebulae to be much more impactful for imaging. For galaxies, battling the light pollution will be a significant challenge regardless of the filter, and often relies heavily on integration time and advanced post-processing.
Saturday, July 12, 2025
Open LLM models
Llama
https://www.llama.com/
Mistral
https://mistral.ai/models
Etc:
| Model | Params | Developer | License | Notes |
|---|---|---|---|---|
| LLaMA 2 | 7B / 13B / 70B | Meta | Custom (non-commercial for some) | Powerful, widely used. Available via Hugging Face. |
| LLaMA 3 | 8B / 70B | Meta | Custom (open-weight) | Newer, more capable than LLaMA 2. May have commercial restrictions. |
| Mistral 7B | 7B | Mistral AI | Apache 2.0 | Fast, strong performance. Supports multi-query attention. |
| Mixtral 8x7B | ~12.9B active | Mistral AI | Apache 2.0 | Sparse MoE (uses 2 of 8 experts). High performance. |
| Phi-2 / Phi-3 | 2.7B / 3.8B+ | Microsoft | MIT / Open | Small but very efficient. Good for on-device. |
| Gemma 2 | 2B / 7B | Google DeepMind | Apache 2.0 | Lightweight, efficient, for research & commercial use. |
| Command R / R+ | 7B+ | Cohere | Apache 2.0 | Fine-tuned for RAG (retrieval-augmented generation). |
| OpenHermes 2.5 / 2.5-Mistral | 7B | Teknium | Open (depends on base) | Popular open-instruct models built on Mistral. |
| Yi-34B | 34B | 01.AI | Open (restrictions may apply) | High-performance model from China. |
| Dolphin 2.7 | 7B | Cognitive Computations | Open | Strong performance; instruction-tuned. |
| StableLM Zephyr | 3B / 7B | Stability AI | Open | Chat-tuned, aligned with RLHF. |
| Pythia | 70M–12B | EleutherAI | Apache 2.0 | Designed for transparency & research. |
| RedPajama | 3B / 7B | Together / Hazy Research | Apache 2.0 | Full-stack dataset + model project. |
| Falcon | 7B / 40B | TII (UAE) | Apache 2.0 (7B), custom (40B) | Early open model; still useful. |
Where to Use or Download Them
Hugging Face – Most are hosted here with easy-to-use APIs.
Ollama – Run models locally with one command (supports LLaMA 2/3, Mistral, etc.).
LMStudio – GUI for running open LLMs locally on Mac/Windows.
Replicate – Run open models via web APIs.
GPT4All – Desktop apps and models optimized for offline use.
---
🧠 Tips When Choosing a Model
Use Mistral 7B or Mixtral 8x7B for high-quality, efficient chat or RAG apps.
Use Phi-3 or Gemma 2B for on-device or low-resource environments.
Use LLaMA 3 (8B or 70B) if you want Meta’s best open-weight models for research.
List of GenAI
grok.com
chatgpt.com
https://gemini.google.com/app
https://claude.ai
https://chat.deepseek.com
https://www.meta.ai/
https://www.perplexity.ai/
storm.genie.stanford.edu
https://cursor.com/
https://www.airtable.com/
Thursday, July 10, 2025
Wednesday, July 9, 2025
Is a reasoning model an LLM?
Yes, a "reasoning model" often refers to a type of Large Language Model (LLM) that has been specifically developed or fine-tuned to excel at complex, multi-step reasoning tasks.
Here's a breakdown:
* Large Language Models (LLMs): These are powerful AI models trained on vast amounts of text data. They are excellent at understanding and generating human-like text, performing tasks like summarization, translation, and basic question answering.
* Reasoning Models (or Reasoning LLMs): While general LLMs can perform some basic reasoning, "reasoning models" are a specialization. They are designed to break down complex problems (like puzzles, advanced math, or coding challenges) into smaller, manageable steps and then logically work through them. This often involves techniques like "Chain-of-Thought" (CoT) prompting, where the model generates intermediate reasoning steps.
Essentially, a reasoning model is an LLM that has been enhanced to exhibit more robust and explicit reasoning capabilities, often by being trained with specific methods (like reinforcement learning) or prompted to "think step by step."
So, while not all LLMs are specifically "reasoning models," the most advanced reasoning models today are indeed built upon the foundation of large language models.
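As a toy illustration of the Chain-of-Thought idea mentioned above, the snippet below builds a plain prompt and a zero-shot CoT prompt. The only difference is the instruction to reason step by step; no model is actually called, and the question is a made-up example:

```python
# Toy illustration: a plain prompt vs. a zero-shot Chain-of-Thought prompt.
# No model is called here; the question is a made-up example.
question = ("A train travels 120 km in 2 hours, then 60 km in 1 hour. "
            "What is its average speed for the whole trip?")

plain_prompt = f"Q: {question}\nA:"

# The classic zero-shot CoT trigger appends a step-by-step instruction,
# nudging the model to emit intermediate reasoning before the answer.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

print(plain_prompt)
print(cot_prompt)
```

In practice, reasoning-tuned models are trained to produce these intermediate steps on their own, without needing the explicit trigger phrase.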
Spending money
Buying things gets boring quickly.
Buying experiences stays with you for life.
Being rich means having passive income (such as interest) greater than your expenses.
Saturday, July 5, 2025
Calculus & Algebra
Calculus studies change and accumulation. It has two main parts:
1. Differential Calculus
How things change
- Focus: Rates of change, slopes, derivatives.
- Key idea: Finds how fast something is changing at any point.
- Example: If you drive a car, differential calculus tells you your instantaneous speed at any moment (the derivative of your position with respect to time).
- Derivative = slope of the tangent line to a curve
- Example: If f(x) = x^2, then f’(x) = 2x, which tells how fast f(x) changes at each x
2. Integral Calculus
How things accumulate
- Focus: Areas under curves, totals, integrals.
- Key idea: Finds the total amount accumulated over time or space.
- Example: If you know your speed at each moment, integral calculus tells you the total distance you’ve traveled.
- Integral = area under the curve
- Example: \int x^2 \, dx = \frac{1}{3}x^3 + C
Fundamental Theorem of Calculus
This theorem connects the two parts:
- Differentiation and integration are opposites.
- If you integrate a function and then differentiate the result, you get the original function back.
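The "opposites" claim can be checked numerically for f(x) = x², using a Riemann sum for the integral and a central difference for the derivative (approximation parameters here are arbitrary choices for illustration):

```python
# Numeric check of the Fundamental Theorem of Calculus for f(x) = x^2:
# integrate f from 0 to x, then differentiate the result, and compare with f.
def f(x):
    return x ** 2

def integral_0_to(x, n=10000):
    # Left Riemann sum approximation of the integral of f from 0 to x.
    h = x / n
    return sum(f(i * h) * h for i in range(n))

def derivative(g, x, h=1e-4):
    # Central-difference approximation of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

# Differentiating the integral recovers f: the result is close to f(2) = 4.
print(derivative(integral_0_to, 2.0))  # ≈ 4.0
```

The small residual error comes from the finite Riemann-sum step and the finite difference, not from the theorem itself.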
Algebra uses symbols (like x, y) to represent numbers and relationships. It helps us solve problems where some values are unknown.
Self-supervised learning
Uses pseudo labels to train a classification model; no labeled data is used at all. The pseudo labels are created by clustering the unlabeled data, i.e., the cluster ids are used as the pseudo labels. This step is called pretexting.
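A minimal sketch of that pretexting step, in pure Python: cluster unlabeled 1-D points with a tiny 2-means loop, then use the cluster ids as pseudo labels for a nearest-centroid "classifier". The data and cluster count are made-up assumptions for illustration:

```python
# Toy pretexting sketch: k-means pseudo labels, no ground-truth labels used.
data = [0.1, 0.3, 0.2, 5.1, 4.9, 5.3, 0.25, 5.0]  # unlabeled points

# Pretext task: 2-means clustering by alternating assignment and update.
c0, c1 = min(data), max(data)
for _ in range(10):
    cluster0 = [x for x in data if abs(x - c0) <= abs(x - c1)]
    cluster1 = [x for x in data if abs(x - c0) > abs(x - c1)]
    c0 = sum(cluster0) / len(cluster0)
    c1 = sum(cluster1) / len(cluster1)

# Cluster ids become the pseudo labels for the downstream model.
pseudo_labels = [0 if abs(x - c0) <= abs(x - c1) else 1 for x in data]

def classify(x):
    # Nearest-centroid classifier "trained" on the pseudo labels.
    return 0 if abs(x - c0) <= abs(x - c1) else 1

print(pseudo_labels)                 # [0, 0, 0, 1, 1, 1, 0, 1]
print(classify(0.4), classify(4.5))  # 0 1
```

In real self-supervised pipelines the downstream model is a neural network and the clustering runs in a learned feature space, but the label-free flow is the same.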
Wednesday, July 2, 2025
harmonic mean vs arithmetic mean
The harmonic mean is the appropriate average for ratios and rates because it gives equal weight to each "event" or "unit of work" rather than each individual number or time interval. Here's a deeper dive into why:
1. The Reciprocal (i.e. 1/x) Relationship
The harmonic mean is defined as the reciprocal of the arithmetic mean of the reciprocals of the data points. This "reciprocal of the average of the reciprocals" structure is key. When you're dealing with rates (like miles per hour, or words per minute), you're essentially looking at a ratio of two quantities (distance/time, words/time).
Consider the classic example of average speed:
* Scenario 1: Equal Distances
Imagine you drive from point A to point B at 60 km/h and return from point B to point A (the same distance) at 20 km/h. What's your average speed for the entire trip?
* If you used the arithmetic mean: (60 + 20) / 2 = 40 km/h. This is incorrect.
* Let's analyze it with the harmonic mean.
* Distance (d) is constant.
* Time for outbound trip = d/60
* Time for return trip = d/20
* Total distance = 2d
* Total time = d/60 + d/20 = d(1/60 + 1/20) = d(4/60) = d/15
* Average speed = Total Distance / Total Time = 2d / (d/15) = 2 × 15 = 30 km/h.
* Notice that the harmonic mean of 60 and 20 is: 2 / (1/60 + 1/20) = 2 / (4/60) = 2 × 60 / 4 = 120 / 4 = 30 km/h.
In this scenario, where the distance (the "work" done) is constant for each segment, the harmonic mean gives the correct average speed. The arithmetic mean would be too high because you spend more time traveling at the slower speed. The harmonic mean inherently accounts for the longer time spent at the lower rate.
* Scenario 2: Equal Times
If you drive for 1 hour at 60 km/h and then for another 1 hour at 20 km/h, your average speed would be the arithmetic mean: (60 × 1 + 20 × 1) / (1 + 1) = (60 + 20) / 2 = 40 km/h. In this case, since the time (the denominator of the rate) is constant for each segment, the arithmetic mean is appropriate.
--gemini
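The two scenarios above can be checked numerically with the standard library (the distance value of 120 km is an arbitrary choice; any positive distance gives the same averages):

```python
# Equal-distance legs -> harmonic mean; equal-time legs -> arithmetic mean.
from statistics import harmonic_mean, mean

speeds = [60, 20]  # km/h for the two legs

# Scenario 1: equal distances (d km each way).
d = 120
total_time = d / 60 + d / 20            # hours spent on each leg, summed
avg_equal_distance = 2 * d / total_time # total distance / total time
print(avg_equal_distance, harmonic_mean(speeds))  # both ≈ 30 km/h

# Scenario 2: equal times (1 hour at each speed).
avg_equal_time = (60 * 1 + 20 * 1) / (1 + 1)
print(avg_equal_time, mean(speeds))               # both 40 km/h
```

The harmonic mean matches the equal-distance case exactly because each leg contributes the same amount of "work" (distance), while the slower leg contributes more time.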
IEEE AI consent
"Authors understand that the use of artificial intelligence (AI)–generated content in an article (including but not limited to text, figures, images, and code) shall be disclosed in the acknowledgments section of any paper submitted to an IEEE Conference or Periodical. The sections of the paper that use AI-generated content shall have a citation to the AI system used to generate the content."
Many Photos with Short Exposures vs Fewer Photos with Longer Exposures
In astrophotography stacking, the debate between taking many photos with short durations versus fewer photos with longer durations is complex, and the "better" option often depends on various factors. Both approaches have their distinct advantages and disadvantages.
Many Photos with Short Durations (Short Exposures):
Advantages:
Mitigates Tracking Errors: Even with the best tracking mounts, minor deviations can occur. Shorter exposures minimize the impact of these errors, resulting in sharper stars and less trailing.
Reduces Overexposure: Bright objects (like bright stars or the core of some nebulae) can easily be overexposed with long exposures, leading to a loss of detail. Shorter exposures help preserve detail in high-dynamic-range objects.
Increased Flexibility and Error Tolerance: If a single short exposure is ruined by a plane, satellite trail, or sudden atmospheric turbulence, it's easier to discard that one frame without significantly impacting the overall data.
With many frames, you have more redundancy.
Reduced Thermal Noise (for uncooled cameras): Shorter exposures mean the camera sensor heats up less, which can reduce thermal noise, especially in DSLRs or mirrorless cameras without active cooling.
Better for Fast-Moving Objects or Poor Seeing: For objects like planets or the Moon, or in conditions with turbulent atmosphere (poor "seeing"), very short exposures are crucial to "freeze" the image and capture sharp details. While deep-sky objects are much slower, shorter exposures can still help mitigate atmospheric blurring.
Easier on Mounts: Less demanding on tracking accuracy, especially for less expensive or less precisely aligned mounts.
Disadvantages:
Higher Read Noise Contribution: Each time an image is read out from the sensor, it introduces a small amount of "read noise." With many short exposures, the cumulative read noise can become more significant.
More Files and Processing Time: A large number of short exposures means more individual files to manage and process, which can be computationally intensive.
May Not Capture Faint Details: If individual short exposures are too short, the signal from very faint objects might not be strong enough to rise above the read noise in a single frame, even with stacking. You need enough signal in each sub-exposure to make the stacking effective for faint targets.
Fewer Photos with Longer Durations (Long Exposures):
Advantages:
Better Signal-to-Noise Ratio (SNR) for Faint Objects: Longer exposures collect more light (signal) from faint deep-sky objects, allowing their signal to rise more prominently above the noise floor (especially read noise). This leads to clearer, smoother, and more detailed images, particularly in low-light areas.
Less Read Noise: With fewer exposures, the overall read noise contribution is reduced because it's incurred once per frame.
Less Processing Required: Fewer frames generally mean a simpler workflow and less computational demand.
Captures More Photons per Frame: This directly translates to more signal from the target, which is essential for revealing dim structures and colors.
Disadvantages:
More Susceptible to Tracking Errors: Any movement, drift, or periodic error in the mount becomes more apparent and can lead to star trailing or blurring.
Higher Risk of Overexposure: Bright stars or nebula cores can easily be blown out, losing all detail and color.
Less Forgiving of Mistakes: If a single long exposure is ruined by a plane, wind gust, or mis-tracking, you lose a significant amount of valuable integration time.
Increased Thermal Noise (for uncooled cameras): The sensor heats up more during longer exposures, which can increase thermal noise and hot pixels.
Requires More Precise Equipment: Demands a highly accurate and well-aligned equatorial mount, and often autoguiding, to prevent star trails.
Which is better?
There isn't a universally "better" option; the optimal choice often involves a balance and depends on:
Your Equipment:
Mount Accuracy: If you have an excellent, precisely aligned, and well-guiding mount, longer exposures become more feasible.
Camera Sensor: Cameras with very low read noise can benefit more from shorter exposures, as the read noise penalty is less significant. Cooled astrophotography cameras help mitigate thermal noise in longer exposures.
The Target:
Bright Objects: For bright nebulae or star clusters, shorter exposures can prevent saturation.
You might even combine different exposure lengths (HDR stacking) for objects with extreme dynamic range (e.g., the Orion Nebula).
Faint Objects: For very faint galaxies or nebulae, longer individual exposures are often preferred to ensure enough signal is captured in each frame to rise above the read noise.
Sky Conditions:
Light Pollution: In light-polluted skies, shorter exposures might be necessary to avoid quickly saturating the sky background.
Atmospheric Seeing: If the atmosphere is turbulent (poor seeing), shorter exposures can "freeze" the turbulence, resulting in sharper stars. In excellent seeing, longer exposures are less problematic.
General Rule of Thumb:
Many experienced astrophotographers aim for individual sub-exposure lengths that are long enough for the signal from the faintest details of the target to be significantly above the camera's read noise, but not so long that bright stars are saturated or tracking errors become apparent. Then, they take as many of these optimal-length exposures as possible to maximize the total integration time and further improve the signal-to-noise ratio.
In essence, stacking is always beneficial as it averages out random noise. The question is about the length of each individual exposure ("sub-exposure"). For deep-sky objects, it's generally a compromise between minimizing read noise (favors longer) and mitigating tracking errors/overexposure/hot pixels (favors shorter).
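The read-noise trade-off can be put in numbers with a simple idealised noise model: a stack of N subs of t seconds collects signal S·N·t and noise √(N·((S+B)·t + R²)), where S is target flux, B is sky flux (both in electrons/second), and R is read noise in electrons per frame. All the parameter values below are made-up assumptions for illustration, not measurements:

```python
# Idealised stacked-exposure SNR: shot noise from target + sky, plus one
# read-noise penalty per sub-exposure (dark current ignored for simplicity).
import math

def stack_snr(n_subs, sub_exposure_s, target_eps, sky_eps, read_noise_e):
    signal = target_eps * n_subs * sub_exposure_s
    noise = math.sqrt(n_subs * ((target_eps + sky_eps) * sub_exposure_s
                                + read_noise_e ** 2))
    return signal / noise

# Both splits use one hour (3600 s) of total integration time.
many_short = stack_snr(n_subs=120, sub_exposure_s=30, target_eps=0.05,
                       sky_eps=0.5, read_noise_e=5)
few_long = stack_snr(n_subs=12, sub_exposure_s=300, target_eps=0.05,
                     sky_eps=0.5, read_noise_e=5)
print(many_short, few_long)  # longer subs win when read noise is significant
```

With these assumed numbers the 300 s subs come out ahead because each 30 s frame pays the same 5 e⁻ read-noise penalty for a tenth of the signal; with a very low-read-noise sensor or a much brighter sky, the gap shrinks, which is exactly the compromise described above.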