Linear independence
Let A = matrix whose columns are the vectors
Let k = number of vectors, n = dimension of each vector
Let r = rank(A)
| Rank r | Condition | Linear Independence? |
|---|---|---|
| r=k | Full column rank | Independent |
| r<k | Not full rank | Dependent |
| k>n | More vectors than dimensions | Always dependent |
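A minimal sketch (assuming NumPy) of this rank test; the three vectors are made up for illustration, with the third equal to the sum of the first two:

```python
import numpy as np

# Made-up vectors: the third is the sum of the first two, so the set is dependent.
vectors = [np.array([1.0, 0.0, 2.0]),
           np.array([0.0, 1.0, 1.0]),
           np.array([1.0, 1.0, 3.0])]

A = np.column_stack(vectors)      # n x k matrix whose columns are the vectors
k = A.shape[1]                    # number of vectors
r = np.linalg.matrix_rank(A)      # rank of A

print("r =", r, ", k =", k)
print("Independent" if r == k else "Dependent")   # prints "Dependent" (r = 2 < k = 3)
```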
Consistency
A system of equations is consistent if it has at least one solution. It is inconsistent if no solution exists.
Using rank: the system is consistent when the rank of the coefficient matrix equals the rank of the augmented matrix; otherwise it is inconsistent.
Let A = coefficient matrix
Let [A∣B] = augmented matrix
Let r(A) = rank of coefficient matrix
Let r([A∣B]) = rank of augmented matrix
Case 1: Consistent System
r(A)=r([A∣B])
Then:
If this common rank = number of unknowns → unique solution
If this rank < number of unknowns → infinite solutions
Case 2: Inconsistent System
r(A)≠r([A∣B])
Meaning: the augmented matrix introduces a contradiction.
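A minimal sketch (assuming NumPy) of this rank comparison; the 2×2 system is made up so that the augmented matrix introduces a contradiction:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])        # coefficient matrix
B = np.array([[3.0],
              [7.0]])             # right-hand side: second equation contradicts the first

r_A   = np.linalg.matrix_rank(A)
r_aug = np.linalg.matrix_rank(np.hstack([A, B]))   # rank of [A|B]
n     = A.shape[1]                                 # number of unknowns

if r_A != r_aug:
    print("Inconsistent: no solution")
elif r_A == n:
    print("Consistent: unique solution")
else:
    print("Consistent: infinitely many solutions")
```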
Eigenvalues
Solve the characteristic equation: det(A−λI) = 0 → λ = a, b
Eigenvectors
For λ = a: (A−aI)v = 0; with v = (x, y)ᵀ this gives x + y = 0 ⟹ y = −x.
Choosing x = 1: $$v = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
Note that $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\begin{bmatrix} -1 \\ 1 \end{bmatrix}$ are the same eigenvector up to a scalar multiple; any nonzero multiple works.
For λ = b: (A−bI)v = 0; solved the same way.
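A minimal sketch (assuming NumPy); the 2×2 matrix below is a made-up illustration whose eigenvector for λ = −1 satisfies x + y = 0, as in the case above:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are the v's
print(eigenvalues)                             # [ 1. -1.]

# Check A v = lambda v for each pair (eigenvectors are only defined up to scaling).
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(lam, v, np.allclose(A @ v, lam * v))
```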
Maximization using Lagrange multipliers
Step 1: Form the Lagrangian
$$\mathcal{L} = 4u_1^2 + 3u_1u_2 + 6u_2^2 + \lambda(56 - u_1 - u_2)$$
Step 2: Take partial derivatives and set them to zero
(i) Derivative w.r.t. $u_1$:
$$\frac{\partial \mathcal{L}}{\partial u_1} = 8u_1 + 3u_2 - \lambda = 0$$
(ii) Derivative w.r.t. $u_2$:
$$\frac{\partial \mathcal{L}}{\partial u_2} = 3u_1 + 12u_2 - \lambda = 0$$
(iii) Derivative w.r.t. $\lambda$ (recovers the constraint):
$$u_1 + u_2 = 56$$
Step 3: Use the first two equations to eliminate λ ($8u_1 + 3u_2 = 3u_1 + 12u_2 \Rightarrow 5u_1 = 9u_2$) and substitute into the constraint
$$\boxed{u_1 = 36,\quad u_2 = 20}$$
These values satisfy the first-order conditions for the constrained optimum; a second-order (bordered Hessian) check confirms whether the point is a maximum or a minimum.
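A minimal sketch (assuming SymPy) that verifies the first-order conditions above:

```python
import sympy as sp

u1, u2, lam = sp.symbols('u1 u2 lam', real=True)
L = 4*u1**2 + 3*u1*u2 + 6*u2**2 + lam*(56 - u1 - u2)

# First-order conditions: dL/du1, dL/du2, dL/dlambda all set to zero.
foc = [sp.diff(L, u1), sp.diff(L, u2), sp.diff(L, lam)]
print(sp.solve(foc, [u1, u2, lam], dict=True))   # [{u1: 36, u2: 20, lam: 348}]
```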
Cramer's rule to solve equations

Taylor's series and Maclaurin’s series expansion

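For reference, the standard expansion of $f$ about a point $a$ (assuming $f$ has derivatives of all orders there); Maclaurin's series is the special case $a = 0$:
$$f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots$$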
Taylor's polynomial
Same formula as the Taylor series above, but truncated at a finite degree $n$ (see below).
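Truncating the series after the degree-$n$ term gives the Taylor polynomial:
$$P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k$$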

A question

Concave, Convex, Inflection
Convex: the curve bends upward, $f''(x) \ge 0$ ("convex flexes"). Concave: the curve bends downward, $f''(x) \le 0$ ("concave caves"). Inflection point: where $f''$ changes sign, i.e., where the curve switches between convex and concave.

Hessian of a function

Jacobian

Mean value theorem

Theory
Open and Closed Sets
- A set is open if it does not include its boundary points.
- A set S is open if every point in S has a small neighborhood (a ball around it) that is also completely inside S.
- Examples: (2, 5), ℝ, {(x, y) : x² + y² < 1}, ∅ (empty set)
- A set is closed if it contains all its limit points.
- A set is closed if it contains its boundary or if its complement is open.
- Examples: [2, 5], {(x, y) : x² + y² ≤ 1}, {2, 7, 10}
- The complement of an open set is closed
- Example: the complement of the open interval (0,1) is $(-\infty, 0] \cup [1, \infty)$, which is closed
- A set can be neither open nor closed
- Example: [2, 5) is not open (it contains its boundary point 2) and not closed (it does not contain its limit point 5)
How is Euler's theorem used in the product exhaustion theorem?
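A brief sketch of the standard argument: if the production function $F(K, L)$ is homogeneous of degree one (constant returns to scale), Euler's theorem gives
$$K\frac{\partial F}{\partial K} + L\frac{\partial F}{\partial L} = F(K, L),$$
so when each factor is paid its marginal product, total factor payments exactly exhaust the total product.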

Advantages of Sample Survey + Explain Sampling Design (12 marks)
⭐ Advantages of Sample Survey
1. Lower Cost: Sampling is cheaper than a census because fewer observations are collected.
2. Less Time: Results can be obtained quickly since only part of the population is studied.
3. Greater Accuracy (in many cases): With trained investigators and a smaller data size, errors are often lower than in a full census.
4. Operational Feasibility: Some studies (e.g., destructive testing, case studies) cannot be done via a census.
5. Better Quality Control: Supervising small samples is easier and more reliable.
6. Useful when the population is infinite: For example, quality control in industrial production.
7. More detailed information: Researchers can collect high-quality data from fewer units.
⭐ Explain Sampling Design
Sampling design refers to the method and plan used to select the sample from the population.
A good sampling design includes:
1. Target Population: Define clearly who or what is being studied.
2. Sampling Frame: A list or representation from which samples are drawn.
3. Sampling Method:
   - Probability sampling: simple random, stratified, systematic, cluster.
   - Non-probability sampling: convenience, quota, judgement.
4. Sample Size: Decide the number of units based on cost, accuracy, and variability.
5. Selection Procedure: How units will be chosen (random numbers, systematic rule, etc.).
6. Execution: Collecting data according to the design while avoiding bias.
Sampling design ensures:
- Representativeness
- Minimization of bias
- Reliability of conclusions
Sampling methods in brief
1. Probability Sampling
In probability sampling, the selection process is random, ensuring that the sample is statistically representative of the population and minimizing bias.
| Method | Explanation |
|---|---|
| Simple Random | Every individual in the population has an equal, independent chance of being selected (like drawing names out of a hat or using a random number generator). It requires a complete list of the population. |
| Stratified | The population is divided into subgroups (strata) based on shared characteristics (e.g., age, gender, location). A simple random sample is then drawn from each subgroup to ensure representation of all strata. |
| Systematic | A starting point is randomly selected, and then every nth individual is chosen from a list (e.g., selecting every 10th customer entering a store). It is simpler than simple random sampling but still offers good coverage. |
| Cluster | The population is divided into clusters (e.g., geographic areas, schools). The researcher randomly selects a few clusters, and then all individuals within the chosen clusters are sampled. It is cost-effective for large, geographically dispersed populations. |
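A minimal sketch (assuming NumPy) of how two of these schemes can be drawn in practice; the population of 100 labelled units and the seed are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
population = np.arange(100)                 # units labelled 0..99

# Simple random sampling: equal chance for every unit, drawn without replacement.
simple_random = rng.choice(population, size=10, replace=False)

# Systematic sampling: random starting point, then every 10th unit.
start = rng.integers(0, 10)
systematic = population[start::10]

print(simple_random)
print(systematic)
```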
2. Non-Probability Sampling
In non-probability sampling, the selection process is non-random and relies on the researcher's subjective judgment or convenience, which can introduce selection bias. These methods are often used in qualitative or exploratory research.
| Method | Explanation |
|---|---|
| Convenience | Individuals are selected simply because they are easily accessible or "convenient" to the researcher (e.g., surveying friends, family, or students in a specific classroom). Results are often not generalizable to the wider population. |
| Quota | Similar to stratified sampling, the researcher identifies population subgroups and sets quotas for each. However, participants within those quotas are selected using non-random methods (e.g., convenience or judgment) until the quota is filled. |
| Judgement (Purposive) | The researcher intentionally selects participants based on their specific knowledge, expertise, or characteristics that are relevant to the study's purpose. The researcher uses their "judgment" to choose the most informative participants. |
Types of Biases in Sample Survey (6 marks)
Bias = systematic error in data collection or sampling.
1. Selection Bias
Occurs when the sample is not representative of the population.
Example: surveying only urban households for national consumption.
2. Non-response Bias
Some selected units refuse to respond or cannot be contacted.
The final sample differs from the intended one.
3. Response Bias
Respondents give inaccurate answers intentionally or unintentionally.
Example: understating income, overstating charitable donations.
4. Sampling Bias
Arises due to poor sampling method, e.g., convenience sampling.
5. Interviewer Bias
Interviewer influences responses through tone, wording, or behaviour.
6. Measurement Bias
Errors introduced by faulty instruments, ambiguous questions, or poor questionnaire design.
7. Recall Bias
Respondents cannot remember past events correctly (common in surveys on spending, health, etc.).
8. Processing Bias
Mistakes in coding, entering, or processing data.
Properties of a Continuous Function (4 marks)
A function $f(x)$ is continuous at a point $a$ if:
$$\lim_{x \to a} f(x) = f(a)$$
Key properties:
- Sums, differences, products, and quotients (with nonzero denominator) of continuous functions are continuous.
- Compositions of continuous functions are continuous.
- Intermediate Value Theorem: on $[a, b]$, $f$ takes every value between $f(a)$ and $f(b)$.
- Extreme Value Theorem: on a closed interval $[a, b]$, $f$ is bounded and attains its maximum and minimum.

Homogeneous vs. Homothetic Functions

Examples of Different Types of Sequences
(a) Finitely Oscillatory Sequence
A sequence that oscillates only a finite number of times and then settles.
$$1,−1,1,−1,1,1,1,1,1,…$$
(b) Sequence Divergent to $+\infty$
$$a_n = n$$
(c) Sequence Divergent to $-\infty$
$$a_n = -n$$
(d) Infinitely Oscillatory Sequence
A sequence that keeps oscillating without settling.
Example:
$$a_n = (-1)^n$$ or $$a_n = \sin(n)$$
Terms
(a) Critical region

(b) One-tailed and two-tailed tests

(c) Standard error

(d) P-value method of hypothesis testing

(e) Orthogonal Matrix

(f) Idempotent Matrix

(g) Eigenvalue, Eigenvector, Characteristic Equation

(h) norm, inner product, linear independence of vectors
- norm = magnitude (length) of a vector: $\|v\| = \sqrt{\langle v, v \rangle}$
- inner product = generalization of the dot product
- linear independence = no vector in the set can be written as a linear combination of the others

(i) Differences
a. Parameter vs. Statistic

b. Type I and Type II errors

c. Normal distribution and Standard normal distribution
