User contributions for Thakshashila
Results for Thakshashila (talk | block log | uploads | logs)
A user with 264 edits. Account created on 11 April 2025.
12 June 2025
- 11:50, 12 June 2025 diff hist +2,653 N Chemical Thermodynamics Created page with "== Chemical Thermodynamics == '''Chemical Thermodynamics''' is the branch of thermodynamics that studies the interrelation of heat and work with chemical reactions or physical changes of state within chemical systems. It provides the framework to predict whether a reaction will occur spontaneously and to what extent it proceeds. === Basic Concepts === Chemical thermodynamics deals with the energy changes and equilibrium conditions in chemical reactions, focusing on va..." current
- 11:49, 12 June 2025 diff hist +2,526 N Entropy Created page with "== Entropy == '''Entropy''' (symbol <math>S</math>) is a fundamental thermodynamic property that measures the degree of disorder or randomness in a system. It quantifies the number of microscopic configurations that correspond to a thermodynamic system's macroscopic state. === Definition === Entropy is related to the number of possible microstates (<math>\Omega</math>) by the Boltzmann equation: <math> S = k_B \ln \Omega </math> where: * <math>S</math> = entropy..." current
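The entry above quotes the Boltzmann relation <math>S = k_B \ln \Omega</math>. A minimal numerical sketch of that formula (the function name is mine, not from the logged page; the constant is the CODATA value):

```python
import math

# Boltzmann constant in J/K (CODATA 2018 exact value)
K_B = 1.380649e-23

def boltzmann_entropy(omega: int) -> float:
    """Entropy S = k_B * ln(Omega) for a system with Omega microstates."""
    return K_B * math.log(omega)

# A system with exactly one accessible microstate has zero entropy.
assert boltzmann_entropy(1) == 0.0

# Doubling the number of microstates adds k_B * ln(2) of entropy.
delta = boltzmann_entropy(2) - boltzmann_entropy(1)
```

The additivity shown in `delta` is why entropy of independent subsystems sums: microstate counts multiply, and the logarithm turns that product into a sum.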
- 11:49, 12 June 2025 diff hist +2,990 N Phase Equilibrium Created page with "== Phase Equilibrium == '''Phase equilibrium''' refers to the condition where multiple phases of a substance coexist in equilibrium without any net change in their amounts over time. It occurs when the chemical potential of each component is the same in all coexisting phases, ensuring no driving force for phase change. === Basics === In a system involving different phases (solid, liquid, gas), phase equilibrium is established when the rates of phase transitions (such a..." current
- 11:48, 12 June 2025 diff hist +2,941 N Thermodynamic Potential Created page with "== Thermodynamic Potential == '''Thermodynamic potentials''' are scalar quantities used in thermodynamics to describe the equilibrium and spontaneous behavior of physical systems. They are functions of state variables such as temperature, pressure, volume, and entropy, and provide criteria for spontaneous processes and equilibrium under different constraints. === Overview === Thermodynamic potentials combine the system's internal energy with other thermodynamic paramet..." current
- 11:47, 12 June 2025 diff hist +2,466 N Gibbs Free Energy Created page with "== Gibbs Free Energy == '''Gibbs Free Energy''' (denoted as <math>G</math>) is a thermodynamic potential that measures the maximum reversible work a thermodynamic system can perform at constant temperature and pressure. It is an important concept in chemistry and physics, used to predict the spontaneity of chemical reactions and phase changes. === Definition === Gibbs Free Energy is defined as: <math>G = H - TS</math> where: * <math>G</math> = Gibbs free energy *..." current
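The excerpt above gives <math>G = H - TS</math> and says the sign of the change predicts spontaneity. A sketch of that criterion (function names and the worked example are mine; the ice-melting figures are standard textbook values, roughly ΔH ≈ +6.01 kJ/mol and ΔS ≈ +22 J/(mol·K)):

```python
def gibbs_free_energy(enthalpy_j: float, temperature_k: float, entropy_j_per_k: float) -> float:
    """G = H - T*S, all quantities in SI units (J, K, J/K)."""
    return enthalpy_j - temperature_k * entropy_j_per_k

def is_spontaneous(delta_h: float, t: float, delta_s: float) -> bool:
    """At constant T and P a process is spontaneous when dG = dH - T*dS < 0."""
    return gibbs_free_energy(delta_h, t, delta_s) < 0

# Melting ice: dH = +6010 J/mol, dS = +22.0 J/(mol*K).
# Spontaneous above ~273 K, non-spontaneous below it.
print(is_spontaneous(6010, 298.15, 22.0))  # True  (room temperature)
print(is_spontaneous(6010, 250.0, 22.0))   # False (well below freezing)
```

The crossover temperature where ΔG = 0 is ΔH/ΔS ≈ 6010/22 ≈ 273 K, which is exactly the melting point, as the definition requires.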
11 June 2025
- 11:44, 11 June 2025 diff hist +3,251 N Convolutional Neural Network Created page with "== Convolutional Neural Networks (CNNs) == A '''Convolutional Neural Network (CNN)''' is a type of deep learning model specially designed for working with '''image data'''. CNNs are widely used in computer vision tasks like image classification, object detection, and face recognition. === Why CNNs for Images? === Images are large (millions of pixels), and fully connected neural networks don't scale well with size. CNNs solve this by using convolution operati..." current
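The excerpt above says CNNs replace fully connected layers with convolution operations. A bare-bones sketch of the core operation (a valid, stride-1 2D cross-correlation; the function name and toy image are mine, not from the logged page):

```python
def convolve2d(image, kernel):
    """Valid cross-correlation of a small 2D image with a kernel (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel over the image and take the elementwise product-sum.
            out[i][j] = sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw)
            )
    return out

# A 1x2 horizontal-difference kernel responds only where intensity changes,
# i.e. it detects the vertical edge in this left-bright / right-dark image.
image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
kernel = [[1, -1]]
print(convolve2d(image, kernel))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

The same small kernel is reused at every position, which is why a convolutional layer needs only `kh * kw` weights regardless of image size, addressing the scaling problem the excerpt mentions.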
- 11:12, 11 June 2025 diff hist +3,012 N Backpropagation Created page with "== Backpropagation == '''Backpropagation''' (short for "backward propagation of errors") is a fundamental algorithm used to train neural networks. It calculates how much each weight in the network contributed to the total error and updates them to reduce this error. === Purpose === The main goal of backpropagation is to: * Minimize the '''loss function''' (error) * Improve model accuracy over time by adjusting weights === How It Works (Step-by-Ste..." current
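The excerpt above describes backpropagation as computing each weight's contribution to the error and updating it to reduce that error. A minimal single-neuron sketch of that loop (names and the training target are mine; real networks apply the same chain rule layer by layer):

```python
def train_step(w, b, x, y, lr=0.1):
    """One forward/backward pass for a linear neuron y_hat = w*x + b with squared loss."""
    y_hat = w * x + b            # forward pass
    loss = (y_hat - y) ** 2      # squared error
    # Backward pass (chain rule): dL/dw = 2*(y_hat - y)*x, dL/db = 2*(y_hat - y)
    grad_w = 2 * (y_hat - y) * x
    grad_b = 2 * (y_hat - y)
    # Step each weight opposite its gradient to shrink the loss.
    return w - lr * grad_w, b - lr * grad_b, loss

w, b = 0.0, 0.0
for _ in range(100):
    w, b, loss = train_step(w, b, x=1.0, y=2.0)
# After training, w*1.0 + b converges to the target 2.0 and the loss approaches 0.
```

With `x = 1.0` the error shrinks by a constant factor each step, so convergence here is geometric; multi-layer networks repeat exactly this gradient bookkeeping through every layer.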
- 10:09, 11 June 2025 diff hist 0 Exploding Gradient Problem →See Also current
- 10:09, 11 June 2025 diff hist +2,974 N Exploding Gradient Problem Created page with "== Exploding Gradient Problem == The '''Exploding Gradient Problem''' is a common issue in training deep neural networks where the gradients grow too large during backpropagation. This leads to very large weight updates, making the model unstable or completely unusable. === What Are Gradients? === Gradients are computed during the backpropagation step of training. They help the model understand how to change its weights to reduce error. :<math> \text{Gradient} =..."
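The excerpt above describes gradients growing too large during backpropagation. The standard mitigation is gradient clipping; a sketch of clipping by global L2 norm (the function name is mine, but the technique matches what frameworks like PyTorch's `clip_grad_norm_` implement):

```python
import math

def clip_by_global_norm(grads, max_norm=1.0):
    """Rescale a flat list of gradients so their global L2 norm is at most max_norm."""
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm:
        return grads  # already small enough: leave untouched
    scale = max_norm / total
    return [g * scale for g in grads]

# An exploded gradient of norm 500 is rescaled to norm 1 without changing its direction.
clipped = clip_by_global_norm([300.0, 400.0], max_norm=1.0)
```

Clipping preserves the update direction while bounding its size, which is why it stabilizes training without biasing which way the weights move.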
- 10:06, 11 June 2025 diff hist +2,942 N Vanishing gradient problem Created page with "== Vanishing Gradient Problem == The '''Vanishing Gradient Problem''' is a common issue encountered during the training of deep neural networks. It occurs when the gradients (used to update weights) become extremely small, effectively preventing the network from learning. === What is a Gradient? === In neural networks, gradients are values calculated during '''backpropagation'''. They show how much the model's weights should change to reduce the loss (error). The..." current
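The excerpt above says gradients become extremely small in deep networks. A sketch of why: the sigmoid's derivative never exceeds 0.25, and the chain rule multiplies one such factor per layer, so the gradient shrinks geometrically with depth (the helper names are mine):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1 - s)  # peaks at 0.25 when x = 0

# Backpropagating through 10 sigmoid layers at their *steepest* point
# still multiplies the gradient by 0.25 ten times.
grad = 1.0
for _ in range(10):
    grad *= sigmoid_grad(0.0)
print(grad)  # 0.25**10, about 9.54e-07
```

This geometric decay is the usual motivation for ReLU activations and residual connections, whose derivatives do not uniformly shrink the signal.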
- 09:06, 11 June 2025 diff hist +519 N Example of ReLU Activation Function Created page with "== ReLU (Rectified Linear Unit) Example == The ReLU function is defined as: :<math>f(x) = \max(0, x)</math> This means: * If ''x'' is '''positive''', it stays the same. * If ''x'' is '''negative''', it becomes ''0''. === Real Number Examples === {| class="wikitable" ! Input (x) ! ReLU Output f(x) |- | -3 | 0 |- | -1 | 0 |- | 0 | 0 |- | 2 | 2 |- | 5 | 5 |} In this table: * Negative numbers become 0 * Positive numbers pass through This makes ReLU very fast..." current
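The excerpt above defines ReLU as <math>f(x) = \max(0, x)</math> and tabulates a few values. A one-line sketch that reproduces that table:

```python
def relu(x):
    """ReLU activation: f(x) = max(0, x)."""
    return max(0, x)

# Same inputs as the table on the logged page: negatives map to 0, positives pass through.
inputs = [-3, -1, 0, 2, 5]
outputs = [relu(x) for x in inputs]
print(outputs)  # [0, 0, 0, 2, 5]
```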
10 June 2025
- 09:45, 10 June 2025 diff hist −42 Main Page No edit summary current
- 09:33, 10 June 2025 diff hist −57 Main Page No edit summary
- 09:33, 10 June 2025 diff hist +3,450 Main Page No edit summary
- 09:29, 10 June 2025 diff hist +221 Main Page No edit summary
- 09:24, 10 June 2025 diff hist +879 Main Page →Categories
- 09:21, 10 June 2025 diff hist −388 Main Page →Categories Tag: Manual revert
- 09:21, 10 June 2025 diff hist +388 Main Page →Categories Tag: Reverted
- 09:18, 10 June 2025 diff hist −1 Main Page →Featured Articles
- 09:17, 10 June 2025 diff hist +34 Main Page →Featured Articles
- 09:13, 10 June 2025 diff hist +714 Main Page →All Pages
- 09:05, 10 June 2025 diff hist +796 Main Page →Statistics
- 09:04, 10 June 2025 diff hist −332 Main Page →Welcome to Qbase
- 09:04, 10 June 2025 diff hist −887 Main Page No edit summary
- 09:03, 10 June 2025 diff hist −405 Main Page No edit summary
- 09:00, 10 June 2025 diff hist −1,443 Main Page →Welcome to Qbase
- 08:59, 10 June 2025 diff hist +2,979 Main Page →Our Mission
- 08:57, 10 June 2025 diff hist +438 Main Page →Welcome to Qbase
- 06:43, 10 June 2025 diff hist 0 Accuracy →Limitations of Accuracy current
- 06:42, 10 June 2025 diff hist +2 Accuracy →Limitations of Accuracy
- 06:42, 10 June 2025 diff hist 0 Accuracy →Limitations of Accuracy
- 06:35, 10 June 2025 diff hist +2,685 N Gradient Descent Created page with "= Gradient Descent = '''Gradient Descent''' is an optimization algorithm used in machine learning and deep learning to minimize the cost (loss) function by iteratively updating model parameters in the direction of steepest descent, i.e., the negative gradient. == What is Gradient Descent? == Gradient Descent helps find the best-fit parameters (like weights in a neural network or coefficients in regression) that minimize the error between predicted and actual values. I..." current
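The excerpt above describes gradient descent as iteratively stepping along the negative gradient. A minimal sketch on a one-dimensional convex function (the function name, learning rate, and example objective are mine, not from the logged page):

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Repeatedly step opposite the gradient: x <- x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum sits at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Here each step multiplies the distance to the minimum by `1 - 2*lr = 0.8`, so the iterate converges geometrically; too large a learning rate (`lr > 1` for this objective) would make the same update rule diverge instead.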
- 06:34, 10 June 2025 diff hist +2,564 N Normalization (Machine Learning) Created page with "= Normalization (Machine Learning) = '''Normalization''' in machine learning is a data preprocessing technique used to scale input features so they fall within a similar range, typically between 0 and 1. This helps improve model performance, especially for algorithms sensitive to the scale of data. == Why Normalize Data? == Some machine learning algorithms (e.g., K-Nearest Neighbors, Gradient Descent-based models, Neural Networks) perform better when input features ar..." current
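The excerpt above describes scaling features into a common range, typically [0, 1]. A sketch of the standard min-max formulation, (x − min) / (max − min) (the function name and guard for constant input are mine):

```python
def min_max_normalize(values):
    """Linearly rescale values into [0, 1] via (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # All values identical: no spread to rescale; map everything to 0.0
        # rather than divide by zero.
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

print(min_max_normalize([10, 20, 30, 40, 50]))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Note that `lo` and `hi` must come from the training data only and be reused on test data, otherwise the test set leaks into preprocessing.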
- 06:26, 10 June 2025 diff hist +37 Weighted F1 →SEO Keywords current
- 06:26, 10 June 2025 diff hist +37 Unsupervised Learning →SEO Keywords current
- 06:25, 10 June 2025 diff hist +37 Underfitting →SEO Keywords current
- 06:25, 10 June 2025 diff hist +37 Train-Test Split →SEO Keywords current
- 06:25, 10 June 2025 diff hist +37 Threshold Tuning →SEO Keywords current
- 06:25, 10 June 2025 diff hist +37 Supervised Learning →SEO Keywords current
- 06:25, 10 June 2025 diff hist +37 Specificity →SEO Keywords current
- 06:24, 10 June 2025 diff hist +37 Sensitivity →SEO Keywords current
- 06:24, 10 June 2025 diff hist +37 Regularization →SEO Keywords current
- 06:24, 10 June 2025 diff hist +37 Regression →SEO Keywords current
- 06:24, 10 June 2025 diff hist +37 Recall →SEO Keywords current
- 06:24, 10 June 2025 diff hist +37 ROC Curve →SEO Keywords current
- 06:24, 10 June 2025 diff hist +37 Precision-Recall Curve →SEO Keywords current
- 06:23, 10 June 2025 diff hist +37 Precision →SEO Keywords current
- 06:23, 10 June 2025 diff hist +37 Overfitting →SEO Keywords current
- 06:23, 10 June 2025 diff hist +37 Model Selection →SEO Keywords current
- 06:23, 10 June 2025 diff hist +37 Model Evaluation Metrics →SEO Keywords current