<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Cross_Validation</id>
	<title>Cross Validation - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://qbase.texpertssolutions.com/index.php?action=history&amp;feed=atom&amp;title=Cross_Validation"/>
	<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Cross_Validation&amp;action=history"/>
	<updated>2026-05-15T12:13:55Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Cross_Validation&amp;diff=206&amp;oldid=prev</id>
		<title>Thakshashila: /* SEO Keywords */</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Cross_Validation&amp;diff=206&amp;oldid=prev"/>
		<updated>2025-06-10T06:20:37Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;SEO Keywords&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 06:20, 10 June 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l59&quot;&gt;Line 59:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 59:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;cross validation, k fold cross validation, stratified cross validation, model validation techniques, overfitting prevention, estimating model performance, machine learning model evaluation&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;cross validation, k fold cross validation, stratified cross validation, model validation techniques, overfitting prevention, estimating model performance, machine learning model evaluation&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Artificial Intelligence]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
	<entry>
		<id>https://qbase.texpertssolutions.com/index.php?title=Cross_Validation&amp;diff=175&amp;oldid=prev</id>
		<title>Thakshashila: Created page with &quot;= Cross-Validation =  &#039;&#039;&#039;Cross-Validation&#039;&#039;&#039; is a statistical method used to estimate the performance of machine learning models on unseen data. It helps ensure that the model generalizes well and reduces the risk of overfitting.  == Why Cross-Validation? ==  When training a model, it is important to test how well it performs on data it has never seen before. Simply evaluating a model on the same data it was trained on can lead to overly optimistic results. Cross-validat...&quot;</title>
		<link rel="alternate" type="text/html" href="https://qbase.texpertssolutions.com/index.php?title=Cross_Validation&amp;diff=175&amp;oldid=prev"/>
		<updated>2025-06-10T05:36:40Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;= Cross-Validation =  &amp;#039;&amp;#039;&amp;#039;Cross-Validation&amp;#039;&amp;#039;&amp;#039; is a statistical method used to estimate the performance of machine learning models on unseen data. It helps ensure that the model generalizes well and reduces the risk of overfitting.  == Why Cross-Validation? ==  When training a model, it is important to test how well it performs on data it has never seen before. Simply evaluating a model on the same data it was trained on can lead to overly optimistic results. Cross-validat...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;= Cross-Validation =&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Cross-Validation&amp;#039;&amp;#039;&amp;#039; is a statistical method used to estimate the performance of machine learning models on unseen data. It helps ensure that the model generalizes well and reduces the risk of overfitting.&lt;br /&gt;
&lt;br /&gt;
== Why Cross-Validation? ==&lt;br /&gt;
&lt;br /&gt;
When training a model, it is important to test how well it performs on data it has never seen before. Simply evaluating a model on the same data it was trained on can lead to overly optimistic results. Cross-validation provides a more reliable estimate of model performance.&lt;br /&gt;
&lt;br /&gt;
== How Cross-Validation Works ==&lt;br /&gt;
&lt;br /&gt;
The most common method is &amp;#039;&amp;#039;&amp;#039;k-fold cross-validation&amp;#039;&amp;#039;&amp;#039;, which involves the following steps:&lt;br /&gt;
&lt;br /&gt;
# Split the dataset into &amp;#039;&amp;#039;&amp;#039;k&amp;#039;&amp;#039;&amp;#039; equal parts called &amp;#039;&amp;#039;&amp;#039;folds&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
# For each fold:&lt;br /&gt;
#* Use the fold as the &amp;#039;&amp;#039;&amp;#039;validation set&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
#* Use the remaining k-1 folds as the &amp;#039;&amp;#039;&amp;#039;training set&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
#* Train the model on the training set and evaluate it on the validation set.&lt;br /&gt;
#* Record the performance metric (e.g., accuracy, F1 score).&lt;br /&gt;
# Average the results from all k folds to get an overall performance estimate.&lt;br /&gt;
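The steps above can be sketched in plain Python. This is a minimal illustration, not a library implementation: the evaluate argument is a hypothetical callback, assumed to train a model on the training indices and return a score for the validation indices.&lt;br /&gt;

```python
def kfold_splits(n, k):
    """Yield (train_indices, val_indices) pairs for k-fold cross-validation."""
    indices = list(range(n))
    base, extra = divmod(n, k)          # first `extra` folds get one extra point
    sizes = [base + 1] * extra + [base] * (k - extra)
    start = 0
    for size in sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

def cross_validate(evaluate, n, k):
    """Average the per-fold scores returned by the `evaluate` callback."""
    scores = [evaluate(train, val) for train, val in kfold_splits(n, k)]
    return sum(scores) / len(scores)
```

The splitter hands out disjoint index sets that together cover the whole dataset, so every point serves exactly once as validation data.&lt;br /&gt;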
&lt;br /&gt;
== Example of 5-Fold Cross-Validation ==&lt;br /&gt;
&lt;br /&gt;
If you have 100 data points and choose k=5:&lt;br /&gt;
&lt;br /&gt;
* The data is split into 5 parts of 20 points each.&lt;br /&gt;
* The model is trained 5 times, each time leaving out one part for validation and training on the other 80 points.&lt;br /&gt;
* The average accuracy over the 5 runs is reported.&lt;br /&gt;
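The arithmetic of this worked example can be checked directly (the variable names here are purely illustrative):&lt;br /&gt;

```python
n, k = 100, 5
fold_size = n // k            # 20 points held out for validation in each run
train_size = n - fold_size    # 80 points used for training in each run
runs = k                      # the model is trained 5 separate times
```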
&lt;br /&gt;
== Types of Cross-Validation ==&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;k-Fold Cross-Validation:&amp;#039;&amp;#039;&amp;#039; Standard method described above.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Stratified k-Fold:&amp;#039;&amp;#039;&amp;#039; Ensures each fold has roughly the same class distribution, important for imbalanced datasets.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Leave-One-Out (LOO):&amp;#039;&amp;#039;&amp;#039; Special case where k equals the number of data points. Each example is used once as validation.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Repeated Cross-Validation:&amp;#039;&amp;#039;&amp;#039; Repeat k-fold multiple times with different splits to get a more stable estimate.&lt;br /&gt;
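One simple way to realise the stratified variant is to deal each class's examples out across the folds round-robin, so every fold ends up with roughly the class proportions of the full dataset. This is an illustrative sketch, not the exact algorithm used by any particular library:&lt;br /&gt;

```python
from collections import defaultdict

def stratified_folds(labels, k):
    """Assign each example index to one of k folds, balancing class counts."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for position, idx in enumerate(indices):
            folds[position % k].append(idx)  # deal this class round-robin
    return folds
```

For the imbalanced-dataset case mentioned above, this spreads a rare class across all folds instead of letting it concentrate in one of them.&lt;br /&gt;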
&lt;br /&gt;
== Advantages of Cross-Validation ==&lt;br /&gt;
&lt;br /&gt;
* Provides a better measure of model performance on unseen data.&lt;br /&gt;
* Helps detect overfitting and underfitting.&lt;br /&gt;
* Makes efficient use of data, since every data point is used for both training and validation.&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
* More computationally expensive than a simple train-test split.&lt;br /&gt;
* Choice of k affects the bias and variance of the estimate:&lt;br /&gt;
** Smaller k (e.g., 5) reduces computation but may increase bias.&lt;br /&gt;
** Larger k (e.g., 10 or LOO) gives lower bias but higher variance and more computation.&lt;br /&gt;
&lt;br /&gt;
== Related Pages ==&lt;br /&gt;
&lt;br /&gt;
* [[Model Selection]]&lt;br /&gt;
* [[Overfitting]]&lt;br /&gt;
* [[Underfitting]]&lt;br /&gt;
* [[Evaluation Metrics]]&lt;br /&gt;
* [[Train-Test Split]]&lt;br /&gt;
&lt;br /&gt;
== SEO Keywords ==&lt;br /&gt;
&lt;br /&gt;
cross validation, k fold cross validation, stratified cross validation, model validation techniques, overfitting prevention, estimating model performance, machine learning model evaluation&lt;/div&gt;</summary>
		<author><name>Thakshashila</name></author>
	</entry>
</feed>