\[\textbf{Proof: logarithm of a product}\] \[ \begin{aligned} \text{Let } u &= a^x, \quad v = a^y, \quad \text{so } x = \log_a u, \; y = \log_a v \qquad \text{By } \log \text{ definition} \\ uv &= a^x a^y = a^{x + y} \\ \log_a (uv) &= x + y \\ \Rightarrow \log_a (uv) &= \log_a u + \log_a v \end{aligned} \]
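A quick numeric check of the rule, in base 2:
\[ \log_2(8 \cdot 4) = \log_2 32 = 5 = 3 + 2 = \log_2 8 + \log_2 4 \]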

\[\textbf{Ring}\] $\text{Let } a, b, c \in \mathbf{R}$. $\text{There are two binary operations, addition and multiplication. They satisfy}$
\[ \text{Associative Law} \] \[ (a \times b) \times c = a \times (b \times c) \] \[ \text{Distributive Law} \] \[a \times (b + c) = a \times b + a \times c, \qquad (b + c) \times a = b \times a + c \times a \] \[\text{Additive inverse} \] \[\text{For all } a \in \mathbf{R} \text{, there exists } b \in \mathbf{R} \text{ such that}\] \[a + b = 0 \] \[ \text{Multiplicative identity} \] \[ \exists\, 1 \in \mathbf{R} \text{ such that } \forall a \in \mathbf{R}\] \[1 \times a = a \times 1 = a\]

\[\textbf{Ideal}\] Let $(\mathbf{I}, +)$ be a subgroup of $(\mathbf{R}, +)$. If for every $a \in \mathbf{I}$ and $r \in \mathbf{R}$ we have $a \cdot r, r \cdot a \in \mathbf{I}$, then $\mathbf{I}$ is called an ideal of $\mathbf{R}$
For example:
$2\mathbb{Z}$ is an ideal of $\mathbb{Z}$

Proof:
\[ \text{let } a = 2k \in 2\mathbb{Z} \text{ and } r \in \mathbb{Z} \quad \text{ where } k \in \mathbb{Z} \\ a \cdot r = r \cdot a = 2(kr) \in 2\mathbb{Z} \\ \implies 2\mathbb{Z} \text{ is an ideal of } \mathbb{Z} \\ \] The following screenshot represents the ring $\mathbf{R} = (\mathbb{Z}, \mathbb{Z})$ and the ideal $\mathbf{I} = (2\mathbb{Z}, 2\mathbb{Z})$ as blue dots

\[\textbf{Left Ideal}\] If $(I, +)$ is a subgroup of $(R, +)$ and $\forall r \in R, \forall x \in I$, $r x \in I$, then $(I, +)$ is called a left ideal of $(R, +)$
Similarly, $(I, +)$ is called a right ideal of $(R, +)$ if $\forall r \in R, \forall x \in I, xr \in I$

\[\textbf{Principal Ideal}\] A principal ideal is an ideal generated by a single element of $R$, where $R$ is a commutative $\textbf{Ring}$: if $a \in R$, then $I = Ra = \{ra \mid r \in R\}$ is a principal ideal
For example, $\left< 2 \right> = 2\mathbb{Z}$ is a principal ideal of $\mathbb{Z}$.


Integral Domain
An integral domain is a commutative \textbf{Ring} $R$ with no zero divisors: if $a, b \in R$ and $a \neq 0$, $b \neq 0$, then $ab \neq 0$
Euclidean Domain
An integral domain is called Euclidean if there exists a function $f: R\backslash\{0\} \rightarrow \mathbb{N} \text{ that satisfies the two properties:}$
1. $f(a) \leq f(ab) \text{ for all nonzero } a, b \in R$
2. $\forall a, b \in R \text{ with } b \neq 0, \text{ there exist } q, r \in R \text{ such that } a = q b + r \text{ where } r = 0 \text{ or } f(r) < f(b)$
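For instance, $\mathbb{Z}$ with $f(a) = |a|$ is a Euclidean domain. A small Scala sketch of the division property (the object and method names are mine, purely illustrative):

// Euclidean division in Z with f(a) = |a|: given a and b != 0,
// compute q, r with a = q*b + r and 0 <= r < |b|, so f(r) < f(b) unless r = 0.
object EuclideanDivision extends App {
  def divMod(a: Int, b: Int): (Int, Int) = {
    require(b != 0)
    val r0 = a % b
    val r  = if (r0 < 0) r0 + math.abs(b) else r0   // force 0 <= r < |b|
    val q  = (a - r) / b
    (q, r)
  }

  val (q, r) = divMod(-7, 3)
  println(s"-7 = $q * 3 + $r")   // -7 = -3 * 3 + 2, and f(2) = 2 < 3 = f(3)
}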
\[ \textbf{Ring homomorphism} \] Let $\phi: R \rightarrow S$ be a function between two rings $R$ and $S$, then $\phi$ is a $\mathit{ring}$ homomorphism if
for all $a \in R$ and $b \in R$
\[\phi(a+b) = \phi(a) + \phi(b)\] \[\phi(ab) = \phi(a)\phi(b)\] and \[\phi(1) = 1\] \[\textbf{Ideal}\]
Let $R$ be a ring and let $I$ be an additive subgroup of $R$, then $I$ is called an ideal of $R$, written $I \triangleleft R$,
if $\forall a \in I$ and $\forall r \in R$, $ar \in I$ and $ra \in I$

Example
$R = (\mathbb{Z}, +, \times)$ and $I = 2\mathbb{Z} = \{2k \mid k \in \mathbb{Z}\}$
\[\text{Let I be the kernel of } \phi, \text{ then I is an ideal of R} \] Let $a \in I$ and $r \in R$, then $\phi(ra) = \phi(r)\phi(a)$
$I$ is the kernel of $\phi$
$\Rightarrow \phi(a) = 0 \therefore \phi(ra) = \phi(r) \cdot 0 = 0, \therefore ra \in I$ (and similarly $ar \in I$)
Let $\phi: \mathbb{C} \rightarrow \mathbb{C}$ be the map that sends a complex number to its complex conjugate. Then $\phi$ is an automorphism of $\mathbb{C}$, and $\phi$ is its own inverse.

\begin{align*} \phi(z) &= \overline{z}\\ \phi(z_1 + z_2) &= \overline{z_1 + z_2} = \overline{z_1} + \overline{z_2} = \phi(z_1) + \phi(z_2)\\ \phi(z_1 z_2) &= \overline{z_1 z_2} = \overline{z_1} \cdot \overline{z_2} = \phi(z_1)\phi(z_2)\\ \phi(\phi(z)) &= \overline{\overline{z}} = z
\end{align*}
Let $\phi: \mathbb{R}[x] \rightarrow \mathbb{R}[x]$ be the map that sends $f(x)$ to $f(x+1)$.
Then $\phi$ is an automorphism of $\mathbb{R}[x]$. The inverse map sends $f(x)$ to $f(x-1)$.

\[ \textbf{ Semigroup } \] A semigroup is a set $S$ together with a binary operator $\otimes \colon S \times S \rightarrow S$ that satisfies the associative property
\[ \forall \, a, b, c \in S: \quad (a \otimes b) \otimes c = a \otimes (b \otimes c) \]

\[ \textbf{ Monoid } \] A monoid is a triple $(S, \otimes, \overline{1})$ where
1. $\otimes$ is a closed, associative binary operator on the set $S$
2. $\overline{1}$ is the identity element for $\otimes$
That is, $\forall\; a, b, c \in S$:
\[ (a \otimes b) \otimes c = a \otimes (b \otimes c) \] \[ a \otimes \overline{1} = \overline{1} \otimes a = a \]

abstract class SemiGroup[A]{
    def add(x: A, y:A):A
}

abstract class Monoid[A] extends SemiGroup[A]{
    def unit: A
}

object MyMonoid extends App{
      implicit object StringMonoid extends Monoid[String]{
          def add(x: String, y:String):String = x concat y
          def unit: String = ""
      }

      implicit object IntMonoid extends Monoid[Int]{
          def add(x: Int, y:Int):Int = x + y
          def unit: Int = 0
      }
      def sum[A](xs: List[A])(implicit m: Monoid[A]): A = {
          if(xs.isEmpty) m.unit 
          else m.add(xs.head, sum(xs.tail))
      }
      println(sum(List(1, 2, 3)))
      println(sum(List("a", "b", "c")))
}

Definition of Group
$\text{Let a, b, c} \in \mathbf{G}$
There is a binary operation $*$ that satisfies
Closure Law
$ a*b \in \mathbf{G} $
Associative Law
$ (a*b)*c = a*(b*c)$
Identity
$\exists \mathit{e} \in \mathbf{G} \text{ such that } \mathit{e}*a = a*\mathit{e} = a $
Inverse
$ \text{If a } \in \mathbf{G}, \exists a^{-1} \in \mathbf{G} \text{ such that } a*a^{-1} = a^{-1}*a = e $
Definition of SubGroup
Given a group $(G, \otimes)$
1. $H$ is the subset of $G$
2. $H$ forms a group under the same binary operation as $G$
For example, $(2\mathbb{Z}, +)$ is a subgroup of $(\mathbb{Z}, +)$
Coset of a Group
Coming soon
Normal Subgroup
Coming soon
Group homomorphism(operation preserving)
Given groups $(G, \oplus)$ and $(H, \otimes)$, let $\phi : G \rightarrow H$ be a function.
If $\phi(a \oplus b) = \phi(a) \otimes \phi(b)$ for all $a, b \in G$, then $\phi$ is a group homomorphism from $(G, \oplus)$ to $(H, \otimes)$

Concrete example
Given $G = (\mathbb{R}, +)$ and $H = (\mathbb{R}_{>0}, *)$, then $\phi(x) = e^x$ is a homomorphism from $G$ to $H$
\begin{align*} \forall &a, b \in \mathbb{R} \\ \phi(a + b) &= e^{a + b} \\ \phi(a)*\phi(b) &= e^{a}*e^{b} = e^{a+b} \\ \Rightarrow \phi(a + b) &= \phi(a)*\phi(b) \\ \Rightarrow \phi(x) &= e^{x} \text{ is a homomorphism from } G = (\mathbb{R}, +) \text{ to } H = (\mathbb{R}_{>0}, *) \\ \end{align*} Note: $\phi$ automatically maps the identity of $G$ to the identity of $H$; we can derive it as follows.
\begin{align*} &\text{Let } a = 0, b = 0 \\ &\phi(0 + 0) = e^{0 + 0} = 1 \\ &\phi(0)*\phi(0) = e^{0}*e^{0} = 1*1 = 1 \\ &\Rightarrow \phi(0_G) = 1_H \\ &\Rightarrow \phi \text{ maps the identity } 0_G \in G \text{ to the identity } 1_H \in H \quad \square \end{align*}

\[\textbf{Vector Space}\] $\text{Let }\vec{u}, \vec{v}, \vec{w} \in \vec{V} \text{ and scalars } \alpha, \beta \in \mathbb{F}$
Closure
$\vec{u} + \vec{v} \in \vec{V} \text{ and } \alpha\vec{u} \in \vec{V}$
Associative Law
$(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$
Commutative Law
$\vec{u} + \vec{v} = \vec{v} + \vec{u} $
Identity element of addition
$\exists \vec{0} \in \vec{V} \text{ such that } \vec{u} + \vec{0} = \vec{u} \text{ for all } \vec{u} \in \vec{V}$
Inverse element of addition
$\exists -\vec{u} \text{ such that } \vec{u} + (-\vec{u}) = \vec{0}$
Identity element of scalar multiplication
$\exists \mathit{1} \in \mathbb{F} \text{ such that } \mathit{1}\vec{u} = \vec{u}$
Distributivity of scalar multiplication with respect to vector addition
$\alpha(\vec{u} + \vec{v}) = \alpha\vec{u} + \alpha\vec{v}$
Distributivity of scalar multiplication with respect to field addition
$(\alpha + \beta)\vec{u} = \alpha\vec{u} + \beta\vec{u}$

\[ \textbf{Linear Transformations} \] \begin{aligned} & \mbox{A function } \mathit{T}: \mathbb{R}^n \rightarrow \mathbb{R}^m \mbox{ is called a linear transformation if it satisfies} \\ & \mathit{T} ( \mathbf{u} + \mathbf{v} ) = \mathit{T}(\mathbf{u}) + \mathit{T}(\mathbf{v}) \quad \forall \; \mathbf{u} \,, \mathbf{v} \in \mathbb{R}^n\\ & \mathit{T} ( \lambda \mathbf{u} ) = \lambda \mathit{T}(\mathbf{u}) \quad \mbox{for all scalars } \lambda \\ \end{aligned}
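These two axioms can be spot-checked numerically. A small Scala sketch with a made-up map $T(x, y) = (2x + y,\, x - 3y)$ (everything in it is my own illustration, not from the notes above):

// Spot-check the two linearity axioms for T(x, y) = (2x + y, x - 3y) on sample vectors.
object LinearityCheck extends App {
  type Vec = (Double, Double)
  def T(v: Vec): Vec = (2 * v._1 + v._2, v._1 - 3 * v._2)
  def add(a: Vec, b: Vec): Vec = (a._1 + b._1, a._2 + b._2)
  def scale(c: Double, a: Vec): Vec = (c * a._1, c * a._2)

  val (u, v) = ((1.0, 2.0), (-3.0, 0.5))
  println(T(add(u, v)) == add(T(u), T(v)))       // additivity: prints true
  println(T(scale(4.0, u)) == scale(4.0, T(u)))  // homogeneity: prints true
}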

\[\textbf{Euclidean Space}\] Let $V \subseteq \mathbb{R}^n$ be a vector space. An inner product is a function $ \left< , \right>: V \times V \rightarrow \mathbb{R}$, that is $(x, y) \mapsto \left< x \,, y \right>$ for $x, y \in V$, which satisfies the following axioms: \begin{equation} \begin{aligned} \langle ax \,, y \rangle &= a \langle x \,, y \rangle \\ \langle x \,, by \rangle &= b \langle x \,, y \rangle \\ \langle x + y \,, z \rangle &= \langle x \,, z \rangle + \langle y \,, z \rangle \\ \langle x\,, y + z \rangle &= \langle x \,, y \rangle + \langle x \,, z \rangle \\ \langle y \,, x \rangle &= \langle x \,, y \rangle \\ \langle x \,, x \rangle &> 0 \quad x \neq 0 \quad \text{positive definite}\\ \end{aligned} \end{equation} If the inner product is defined as $ \left< u, v \right> = u^{T}v $, we have the Euclidean Structure $(\mathbb{R}^{n}, u^{T}v)$ or $(\mathbb{R}^{n}, \circ)$

\[\textbf{Euclidean Structure}\] The Euclidean Structure is defined by the inner product \[ \langle \vec{u}, \vec{v} \rangle = \sum_{k=1}^{n} u_{k} v_{k}\] The length function is defined as the norm coming from the inner product \[ \| \vec{u} \| = \sqrt{ \langle \vec{u}, \vec{u} \rangle } = \sqrt{ \sum_{k=1}^{n} u_{k}^2 }\] The distance function is called the Euclidean metric. The formula expresses a special case of the Pythagorean Theorem. \[ d(u, v) = \| u - v \| = \sqrt{ \sum_{k=1}^{n} (u_{k}-v_{k})^2 }\] The angle between $\vec{u}$ and $\vec{v}$ is given by \[ \beta = \arccos \frac{ \langle \vec{u}, \vec{v} \rangle }{\|\vec{u}\| \|\vec{v}\| } \]
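A small Scala sketch of these formulas (the helper names are mine):

// Euclidean structure on R^n: inner product, norm, distance, and angle.
object Euclidean extends App {
  def inner(u: Seq[Double], v: Seq[Double]): Double = u.zip(v).map { case (a, b) => a * b }.sum
  def norm(u: Seq[Double]): Double = math.sqrt(inner(u, u))
  def dist(u: Seq[Double], v: Seq[Double]): Double = norm(u.zip(v).map { case (a, b) => a - b })
  def angle(u: Seq[Double], v: Seq[Double]): Double = math.acos(inner(u, v) / (norm(u) * norm(v)))

  val (u, v) = (Seq(1.0, 0.0), Seq(1.0, 1.0))
  println(inner(u, v))   // 1.0
  println(dist(u, v))    // 1.0
  println(angle(u, v))   // ~0.7853 (pi/4)
}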


\[\textbf{Affine Space}\] An affine space is a set $X$ of points that admits a free and transitive action of a vector space $\vec{V}$. That is, there is a map $X \times \vec{V} \rightarrow X:(x, \vec{v}) \mapsto x + \vec{v}$,
called translation by a vector $\vec{v}$, such that
1. Addition of vectors corresponds to composition of translation, i.e., for all $x \in X \text{ and } \vec{u}, \vec{v} \in \vec{V}, (x + \vec{u}) + \vec{v} = x + (\vec{u} + \vec{v})$
2. The zero vector $\vec{0}$ acts as the identity translation, i.e., for all $x \in X, x + \vec{0} = x$
3. The action is transitive, i.e., for all $x, y \in X \text{ there exists } \vec{v} \in \vec{V} \text{ such that } y = x + \vec{v}$
4. The dimension of X is the dimension of vector space translations, $\vec{V}$

Equivalently, there is a unique map
$X \times X \rightarrow \vec{V}:(x, y) \mapsto y - x \text{ such that } y = x + (y - x) \text{ for all }x, y \in X$
It furthermore satisfies
1. For all $x, y, z \in X, z - x = (z - y) + (y - x)$
2. For all $x, y, \in X$ and $\vec{u}, \vec{v} \in \vec{V}$, $ (y + \vec{v}) - (x + \vec{u}) = (y - x) + (\vec{v} - \vec{u})$
3. For all $x \in X, x - x = \vec{0}$
4. For all $x, y \in X, y - x = -(x - y)$

\[ \textbf{Affine Space from linear system equation} \] Consider an $(m \times n)$ linear system of equations
$\sum_{k=1}^{n} a_{i k} x_{k} = c_{i}, (1 \leq i \leq m) \quad\quad\quad \text{(1)}$
where $M = (a_{ik})$ is the coefficient matrix, $d = n - \operatorname{rank}(M)$, and $c = (c_{i}) \ne \vec{0} \in \mathbb{R}^{m}$
When the system has at least one solution $x_{p}$, the full set of solutions is a $d$-dimensional affine space
$A \subset \mathbb{R}^{n}$
Since $x_{p} \in A, \text{ we can declare the point } x_{p} \text{ as the origin of A and then introduce coordinates on A as follows: consider the homogeneous system}$
$\sum_{k=1}^{n} a_{i k} x_{k} = \vec{0} \quad (1 \leq i \leq m)$
$\Rightarrow dim(\ker(M)) = d \quad \text{(Rank Theorem)}$
$\Rightarrow \text{the homogeneous system has d linearly independent solutions } \vec{b_{j}} \in \mathbb{R}^{n} \quad\quad (1 \leq j \leq d)$
Affine Space $A$ can be written as
$A = \Big\{ x_{p} + \sum_{j=1}^{d}\alpha_{j}\vec{b_{j}} \quad \mid \quad \alpha_{j} \in \mathbb{R} \quad\quad (1 \leq j \leq d)\Big\} $
$\text{The } \alpha_{j} \text{ serve as coordinates on A, so that A looks as if it were a d-dimensional coordinate space.}$
$\text{But note that addition(+) in this space refers to the chosen point } x_{p}, \text{ and not to the origin of the underlying vector space}$

\[ \textbf{Affine space and linear system} \] The solution set $\mathit{K}$ of any system $\mathbf{A}\mathbf{x}=\mathbf{b}$ of $m$ linear equations in $n$ unknowns is an affine space, namely a coset of $\ker{T_{A}}$ represented by a particular solution $\mathbf{s} \in \mathbb{R}^{n}$ \[ \mathit{K} = \mathbf{s} + \ker{T_{A}} \] $\mathbf{Proof}$: If $\mathbf{s} \,, \mathbf{w} \in \mathit{K}$, then $\mathbf{A}(\mathbf{w} - \mathbf{s}) = \mathbf{A}\mathbf{w} - \mathbf{A}\mathbf{s} = \mathbf{b} - \mathbf{b} = \mathbf{0}$ so that $\mathbf{w} - \mathbf{s} \in \ker{T_{A}}$. Now let $\mathbf{k} = \mathbf{w} - \mathbf{s} \in \ker{T_{A}}$. Then \[ \mathbf{w} = \mathbf{s} + \mathbf{k} \in \mathbf{s} + \ker{T_{A}} \] Hence $\mathit{K} \subseteq \mathbf{s} + \ker{T_{A}}$. To show the reverse inclusion, suppose $\mathbf{w} \in \mathbf{s} + \ker{T_{A}}$. Then $\mathbf{w} = \mathbf{s} + \mathbf{k}$ for some $\mathbf{k} \in \ker{T_{A}}$. But then \[ \mathbf{A}\mathbf{w} = \mathbf{A}(\mathbf{s} + \mathbf{k}) = \mathbf{A}\mathbf{s} + \mathbf{A}\mathbf{k} = \mathbf{b} + \mathbf{0} = \mathbf{b} \] so $\mathbf{w} \in \mathit{K}$, and $\mathbf{s} + \ker{T_{A}} \subseteq \mathit{K}$. Thus, $\mathit{K} = \mathbf{s} + \ker{T_{A}} \quad \square$
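A concrete instance of this coset description (my own small example): the single equation $x + y = 2$ in $\mathbb{R}^2$.
\begin{align*} \mathbf{A} &= \begin{bmatrix} 1 & 1 \end{bmatrix}, \quad \mathbf{b} = 2, \quad \mathbf{s} = (2, 0) \text{ is a particular solution} \\ \ker{T_{A}} &= \{ t(1, -1) \mid t \in \mathbb{R} \} \\ \mathit{K} &= \mathbf{s} + \ker{T_{A}} = \{ (2 + t,\, -t) \mid t \in \mathbb{R} \} \end{align*}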

\[\textbf{If } \gcd(a, b) = 1 \textbf{ and } a \vert bc\] \[ \textbf{Prove } a \vert c \] $\gcd(a, b) = 1 $
$\Rightarrow \exists m, n \in \mathbb{Z} \quad ma+nb = 1 \quad \text{(Bézout's identity)}$
$\Rightarrow mac + nbc = c$
$\Rightarrow bc = ak \text{ for some } k \in \mathbb{Z} \quad \because a \vert bc$
$\Rightarrow mac + n(ak)=c \quad (bc=ak) $
$\Rightarrow a(mc + nk) = c$
$\Rightarrow a \vert c $
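The Bézout coefficients $m, n$ used above can be computed with the extended Euclidean algorithm. A minimal Scala sketch (object and method names are mine):

// Extended Euclidean algorithm: returns (g, m, n) with m*a + n*b == g == gcd(a, b).
object Bezout extends App {
  def extGcd(a: Long, b: Long): (Long, Long, Long) =
    if (b == 0) (a, 1L, 0L)
    else {
      val (g, m, n) = extGcd(b, a % b)
      (g, n, m - (a / b) * n)
    }

  val (g, m, n) = extGcd(9, 16)                     // gcd(9, 16) = 1
  println(s"$m * 9 + $n * 16 = ${m * 9 + n * 16}")  // -7 * 9 + 4 * 16 = 1
}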

\[\textbf{Theorem 1}\] The image of a linear transformation is spanned by the image of any basis of its domain.
For $T:\vec{V} \rightarrow \vec{W}, \text{ if } \beta=\{ \vec{b_1},\vec{b_2},...,\vec{b_n} \} \text{ is a basis of }\vec{V}, \text{ then }T(\beta) = \{ T(\vec{b_1}), T(\vec{b_2}), ... ,T(\vec{b_n})\} \text{ spans the image of }T$

For all $\vec{v} \in \vec{V}, \vec{v} = \alpha_1\vec{b_1} + \alpha_2\vec{b_2} + ... + \alpha_n\vec{b_n}$
$\Rightarrow T(\vec{v}) = T(\alpha_1\vec{b_1} + \alpha_2\vec{b_2} + ... + \alpha_n\vec{b_n})$
$\Rightarrow T(\vec{v}) = \alpha_1 T(\vec{b_1}) + \alpha_2 T(\vec{b_2}) + ... + \alpha_n T(\vec{b_n})$
$\Rightarrow \{ T(\vec{b_1}), T(\vec{b_2}),...,T(\vec{b_n})\} \text{ spans the image of }T$

\[\textbf{Rank Theorem} \] If the domain is finite-dimensional, then the dimension of the domain is the sum of the rank and the nullity of the transformation
$\text{Let } T:\vec{V} \rightarrow \vec{W} \text{ be a linear transformation },\text{let n be the dimension of }\vec{V},$
$\text{let k be the nullity of }T \text{ and let r be the rank of }T$
$\text{Show } n = k + r$
$\text{Let }\beta = \{ \vec{b_1}, \vec{b_2},...,\vec{b_k}\} \text{ be a basis of the kernel of }T; \text{ the basis can be extended to a basis } \gamma = \{ \vec{b_1}, \vec{b_2},...,\vec{b_k}, \vec{b_{k+1}},...,\vec{b_n}\} \text{ of } \vec{V}$
$\text{let }\vec{v} \in \vec{V} \Rightarrow \vec{v} = \alpha_1 \vec{b_1} + \alpha_2 \vec{b_2} + \dots + \alpha_k \vec{b_k} + \alpha_{k+1}\vec{b}_{k+1}+ \dots +\alpha_{n}\vec{b_n}$
$\text{Let }T(\vec{v}) = T(\alpha_1 \vec{b_1} + \alpha_2 \vec{b_2} + \dots + \alpha_k \vec{b_k} + \alpha_{k+1}\vec{b}_{k+1}+ \dots +\alpha_{n}\vec{b_n}) = \vec{0}$
$\Rightarrow \vec{v} = \alpha_1 \vec{b_1} + \alpha_2 \vec{b_2} + \dots + \alpha_k \vec{b_k} + \alpha_{k+1}\vec{b}_{k+1}+ \dots +\alpha_{n}\vec{b_n} \in \ker(T) \quad\quad \text{(1)}$
$\because \vec{v} \in \ker(T), \text{ it can also be written as } \vec{v} = \sigma_1 \vec{b_1} + \sigma_2 \vec{b_2} + \dots + \sigma_k \vec{b_k} \quad\quad \text{(2)}$
$(1) - (2) \Rightarrow \vec{0} = (\alpha_1-\sigma_1)\vec{b_1} + (\alpha_2 - \sigma_2)\vec{b_2}+ \dots + (\alpha_k - \sigma_k)\vec{b_k}+ \alpha_{k+1}\vec{b}_{k+1}+ \dots +\alpha_{n}\vec{b_n} $
$\because \vec{b}_{1}, \vec{b}_{2},...,\vec{b}_{k},\vec{b}_{k+1}, \vec{b}_{k+2},...,\vec{b_n} \text{ are linearly independent}$
$\therefore \alpha_{k+1}, \alpha_{k+2}, ... , \alpha_{n} \text{ are all zero} \quad\quad \text{(3)}$
$T(\vec{v}) = T(\alpha_1 \vec{b_1}) + T(\alpha_2 \vec{b_2}) + \dots + T(\alpha_k \vec{b_k}) + T(\alpha_{k+1}\vec{b}_{k+1})+ \dots +T(\alpha_{n}\vec{b_n}) = \vec{0}$
$T(\vec{v}) = \alpha_1 T(\vec{b_1}) + \alpha_2 T(\vec{b_2}) + \dots + \alpha_k T(\vec{b_k}) + \alpha_{k+1}T(\vec{b}_{k+1})+ \dots +\alpha_{n}T(\vec{b_n}) = \vec{0}$
$\because \beta = \{ \vec{b_1}, \vec{b_2},...,\vec{b_k}\} \text{ is a basis of the kernel of }T$
$\therefore T(\vec{b_1}) = \vec{0},..., T(\vec{b_k}) = \vec{0}$
$\therefore T(\vec{v}) = \alpha_{k+1}T(\vec{b}_{k+1})+,...,+\alpha_{n}T(\vec{b_n}) = \vec{0} \quad\quad \text{(4)}$
$\text{(3) and (4)} \Rightarrow \{ T(\vec{b}_{k+1}), T(\vec{b_{k+2}}), ... , T(\vec{b_{n}}) \} \text{ are linearly independent}$
$\Rightarrow \dim(\vec{V}) = \text{ nullity(T) } + \text{ rank(T) } \text{ or }$
$\Rightarrow \dim(\vec{V}) = \dim(\ker(T)) + \dim(\text{img(T)}) $
$\Rightarrow n = k + r \quad \square$
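A quick sanity check of the theorem on a concrete matrix (my own example):
\begin{align*} M &= \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \quad T(\vec{x}) = M\vec{x}, \quad T: \mathbb{R}^3 \rightarrow \mathbb{R}^2 \\ \operatorname{rank}(T) &= 2, \qquad \ker(T) = \operatorname{span}\{(-1, -1, 1)\}, \qquad \operatorname{nullity}(T) = 1 \\ \dim(\mathbb{R}^3) &= 3 = 2 + 1 = \operatorname{rank}(T) + \operatorname{nullity}(T) \end{align*}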

Visualize the Kernel of Matrix


\[\textbf{Chart}\] A $\textbf{Chart}$ on a set $M$ is a pair $(\phi, U)$ where $U$ is an open subset of $M$ and $\phi: U \rightarrow \phi(U)$ is a bijection from $U$ to an open subset $\phi(U)$ of $\mathbb{R}^{m}$. An $\textbf{Atlas}$ is a collection $\mathscr{A} = \{(\phi_{\alpha}, U_{\alpha})\}_{\alpha \in A}$ of charts such that the domains $U_{\alpha}$ cover $M$ \[ M = \bigcup_{\alpha \in A} U_{\alpha} \] Example:
Every open subset $U \subseteq M$ has an Atlas consisting of a single chart $(\phi, U) = (id_{U}, U)$, where $id_{U}$ denotes the identity map of $U$
\[\textbf{Topological Space}\] A $\textbf{topological space}$ is a pair $(X, T)$ where $X$ is a set and $T$ is a collection of subsets of $X$ satisfying certain axioms. $T$ is called a topology
1. $\emptyset \in T$ and space $X \in T$
2. If $U_1 \in T, U_2 \in T$, then $U_1 \cap U_2 \in T$ $\textbf{ finite}$
3. If $U_i \in T$ for all $i \in I$, then $\bigcup_{i \in I} U_i \in T$ $\textbf{ finite or infinite}$

\[ X = \{1, 2\} \\ T = \{ \emptyset, \{1\}, \{2\}, \{1, 2\}\} \] $(X, T)$ is a topological space because it satisfies the three axioms

The elements of $T$ are called open sets,
Property 2 implies any $\mathbf{finite}$ intersection of open sets is open
Property 3 implies union of any open sets is open
Any collection of subsets of $X$ satisfying the above properties is called a $\textbf{topology}$ on $X$
\[ \textbf{Topology} \] $\text{Let }\mathcal{M} \text{ be a set. A topology }\mathcal{Q} \text{ is a subset } \mathcal{Q} \subseteq \mathcal{P}(\mathcal{M})$ satisfying
$1. \varnothing \in \mathcal{Q}, \mathcal{M} \in \mathcal{Q}$
$2. \mathcal{U} \in \mathcal{Q}, \mathcal{V} \in \mathcal{Q} \implies \mathcal{U} \cap \mathcal{V} \in \mathcal{Q}$
$3. \mathcal{U}_\alpha \in \mathcal{Q} \text{ for all } \alpha \in \mathcal{A} \implies \bigcup_{\alpha \in \mathcal{A}} \mathcal{U}_\alpha \in \mathcal{Q}$
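For a finite set these axioms can be checked by brute force. A small Scala sketch over the earlier example $X = \{1, 2\}$ (the helper names are mine):

// Brute-force check of the topology axioms for a finite set x and a family t of subsets.
object TopologyCheck extends App {
  def isTopology[A](x: Set[A], t: Set[Set[A]]): Boolean = {
    val hasEmptyAndWhole = t.contains(Set.empty[A]) && t.contains(x)
    val closedUnderIntersection = t.forall(u => t.forall(v => t.contains(u intersect v)))
    // for a finite family, arbitrary unions reduce to unions over every non-empty subfamily
    val closedUnderUnion = t.subsets().forall(sub => sub.isEmpty || t.contains(sub.flatten))
    hasEmptyAndWhole && closedUnderIntersection && closedUnderUnion
  }

  val x = Set(1, 2)
  val t = Set(Set.empty[Int], Set(1), Set(2), Set(1, 2))
  println(isTopology(x, t))   // true: this is the discrete topology on {1, 2}
}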

Homeomorphism - a deformation: stretch, push around, shrink, or enlarge, but without rips
A function $f:X \rightarrow Y$ between two topological spaces $(X, T_x)$ and $(Y, T_y)$ is called $\textbf{Homeomorphism}$ if it has the following properties:
1. $f$ is bijective
2. $f$ and $f^{-1}$ are both continuous
Take the donut to the coffee cup: you can stretch it and push it around without rips.
A homeomorphism is a continuous and invertible function from one topological space to another topological space.

\[\textbf{Smooth}\] Given $f: U \rightarrow V$, $f$ is smooth if $f$ is continuous on $U$ and all derivatives of its components $f_i$, of every order, exist

\[\textbf{Diffeomorphic}\] 1. $f$ is a homeomorphism
2. $f$ and $f^{-1}$ are both differentiable to any order (smooth, or $C^{\infty}$)

Consequently, for every $u \in \tau_x$, there is $v \in \tau_y$ such that $v = f(u)$ and $u = f^{-1}(v)$
Furthermore, since \[ f(u_1 \cap u_2) = f(u_1) \cap f(u_2) \\ f(u_1 \cup u_2) = f(u_1) \cup f(u_2) \] the equivalence extends to the structures of the spaces

Injective
$\text{if } f(x_1) = y_1 \text{ and } f(x_2) = y_2 \text{ and } y_1 = y_2, \text{ then } x_1 = x_2 $

Surjective
$\forall y \in Y \quad \exists x \in X \text{ such that } f(x) = y $

Bijective
$f \text{ is injective and surjective} $
Voronoi Diagram
Definition of Voronoi Diagram
For $p, q \in S$ let \[ B(p, q) = \{ x \mid d(p, x) = d(q, x) \} \] be the bisector of $p, q$; $B(p, q)$ is the line perpendicular to $\overline{pq}$ through the midpoint of the segment $\overline{pq}$
Given a set $S$ of $n$ points in a plane, we wish to associate with each point $s$ a region consisting of all points in the plane closer to $s$ than to any other point $s'$ in $S$. This can be described formally as \[ \mathbf{Vor}(\mathbf{s}) = \{p: \textbf{dist}(s, p) \leq \textbf{dist}(s', p), \forall s' \in S \} \] where $\mathbf{Vor}(\mathbf{s})$ is the Voronoi region for a point $s$
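A direct (quadratic-time) way to decide which Voronoi region a query point falls in is simply to pick the nearest site. A small Scala sketch (the names are mine):

// For a query point p, the Voronoi region it belongs to is the region of the closest site in S.
object VoronoiRegion extends App {
  type Point = (Double, Double)
  def dist(a: Point, b: Point): Double = math.hypot(a._1 - b._1, a._2 - b._2)

  // return the site s that minimizes dist(s, p)
  def nearestSite(sites: Seq[Point], p: Point): Point = sites.minBy(s => dist(s, p))

  val sites = Seq((0.0, 0.0), (4.0, 0.0), (2.0, 3.0))
  println(nearestSite(sites, (1.0, 0.5)))   // (0.0,0.0): the query point lies in Vor((0,0))
}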

Two points Voronoi Diagram

Three points Voronoi Diagram

Differentials and Forward Differentiation (dual numbers) can be implemented in Haskell \begin{align*} (x + \varepsilon x') + (y + \varepsilon y') &= (x + y) + \varepsilon(x' + y') \\ (x + \varepsilon x')(y + \varepsilon y') &= xy + \varepsilon(x'y + y'x) \\ f(x + \varepsilon x') &= f(x) + \varepsilon f'(x)x' \\ f(g(x + \varepsilon x')) &= f( g(x) + \varepsilon g'(x) x') \\ &= f(g(x)) + \varepsilon f'(g(x)) g'(x)x' \\ \end{align*}
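The notes mention Haskell; here is a minimal sketch of the same dual-number idea in Scala (class and method names are mine), carrying a value together with its derivative and applying the rules above:

// A dual number x + eps*x': the value and the derivative propagate by the rules above.
case class Dual(x: Double, dx: Double) {
  def +(that: Dual): Dual = Dual(x + that.x, dx + that.dx)
  def *(that: Dual): Dual = Dual(x * that.x, dx * that.x + x * that.dx)
}

object ForwardDiff extends App {
  // f(u) = u*u + u; seed dx = 1 to differentiate with respect to x
  def f(u: Dual): Dual = u * u + u
  println(f(Dual(3.0, 1.0)))   // Dual(12.0, 7.0): f(3) = 12 and f'(3) = 2*3 + 1 = 7
}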
Mean Value Theorem in Calculus
If $f$ is continuous on the closed interval $[a, b]$ and differentiable on the open interval $(a, b)$, then there exists a point $c \in (a, b)$ such that \begin{align*} f'(c) = \frac{f(b) - f(a)}{b - a} \end{align*}
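For example, take $f(x) = x^2$ on $[0, 2]$:
\begin{align*} \frac{f(2) - f(0)}{2 - 0} = \frac{4 - 0}{2} = 2, \qquad f'(c) = 2c = 2 \implies c = 1 \in (0, 2) \end{align*}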

What is Algebra
We all talk about Algebra from high school to university, but we are hardly ever given a definition of an Algebra.
This is the definition from the Hopf Lecture Notes

An algebra $A$ can be defined as a triple $A=(V, *, I)$ where $V$ is a Vector Space, $*$ is a multiplication and $I$ is the identity
Given a Vector Space $V$ over the field $K$, e.g. $K = \mathbb{C}$, an Algebra satisfies the following properties:
\begin{align*} &a, b, c \in V \\ &*: V \times V \rightarrow V \quad \text{Bilinearity} \tag{1}\\ &(a * b) * c = a * (b * c) \quad \text{Associativity} \\ &1*a = a*1 = a \quad \text{Unitality} \\ \end{align*}
Example: $n \times n$ matrices over the field $K = \mathbb{C}$
It is easy to show that all such matrices form a Vector Space, since they satisfy all the Vector Space properties
\begin{align*} &\forall m_1, m_2, m_3 \in V \\ &m_1 + m_2 + m_3 = m_1 + (m_2 + m_3) \\ &m_1 + 0 = 0 + m_1 = m_1 \\ &m_1 + (-m_1) = 0 \\ &... \\ &\text{ What is a basis of this Vector Space in two dimensions, e.g.} \\ &A= \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ \end{bmatrix} \\ &\text{ The following is a basis for the } 2 \times 2 \text{ matrices } \\ &\left\{ e_1= \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ \end{bmatrix}, e_2= \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ \end{bmatrix}, e_3= \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ \end{bmatrix}, e_4= \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ \end{bmatrix} \right\} \\ &\text{Any } 2 \times 2 \text{ matrix can be written as a linear combination of } \left\{e_1, e_2, e_3, e_4 \right\} \\ &A = 1e_1 + 2e_2 + 3e_3 + 4e_4 \\ \end{align*} Once we have a Vector Space, we can define the multiplication $(*)$ as a map $*: A \times A \rightarrow A$

We know the Inner Product is bilinear; let's see the meaning of bilinearity for the Inner Product
\begin{align*} &\forall u, v, w \in V, \alpha \in F, \text{ e.g. } F = \mathbb{R} \\ &\left< , \right> : V \times V \rightarrow \mathbb{R} \quad \text{ definition of Inner Product} \\ &\left< u, v \right> = u^{T}v \\ &\left< u + v, w \right> = \left< u, w \right> + \left< v, w \right> \quad \text{ linear on the left} \\ &\left< u, v + w \right> = \left< u, v \right> + \left< u, w \right> \quad \text{ linear on the right} \\ \end{align*} It is very similar to distributive law, e.g.
\begin{align*} a(b + c) &= ab + ac = (b + c)a \quad \text{Multiplication is bilinear}\\ \left< a, b + c \right> &= \left< a, b \right> + \left< a, c \right> \\ \end{align*} The multiplication can be defined as matrix multiplication
\begin{align*} &*: V \times V \rightarrow V \\ &*: M \times M \rightarrow M \quad \text{ where } V = M\\ &*:(M, M) \rightarrow M \quad \text{Prefix notation}\\ &M*M \rightarrow M \quad \text{Infix notation} \\ &\forall A, B, C \in M, \alpha \in \mathbb{F} \\ &\text{Check the Associativity} \\ &A * B * C = A * (B * C) \\ &\text{Check the Bilinearity} \\ &A(B + C) = A B + A C, \quad (B + C)A = BA + CA \\ &\text{Check the Unitality} \\ &I A = A I = A \quad \text{ where $I$ is the identity matrix}\\ & \alpha (A B) = (\alpha A) B = A (\alpha B) \end{align*} We do not need to check the multiplication of a scalar and $M$ since it is a property of the Vector Space, not of the bilinearity

Bilinear Form
Definition: a Bilinear form on a Vector Space $V$ over the field $\mathbb{F}$ is a map \[ H: V \times V \rightarrow \mathbb{F} \\ \] The map satisfies the following properties:
\begin{align*} &\forall u, v, w \in V, \alpha \in \mathbb{F} \\ &H(u + v, w) = H(u, w) + H(v, w) \\ &H(u, v + w) = H(u, v) + H(u, w) \\ &H( \alpha u, v) = \alpha H(u, v) \\ &H( u, \alpha v) = \alpha H(u, v) \\ \end{align*} The map $H$ behaves like the multiplication $\times$ that we already know
\begin{align*} H(w, u + v) &= H(w, u) + H(w, v) \\ \times(w, u + v) &= \times(w, u) + \times(w, v) \\ w \times (u + v) &= w \times u + w \times v \\ \end{align*} What we are familiar with is the Inner Product, but the Inner Product is more specific
\begin{align*} \left< u, u \right > \geq 0, \qquad \left< u, u \right > = 0 \Leftrightarrow u = 0 \\ \end{align*}
Matrix bilinear form
Definition: Let $g$ be a bilinear form on a space $V$, and let $\mathcal{\beta} = \{b_1, b_2, ... , b_n \}$ be a basis of $V$. Then the matrix $G = (g_{ij})$ with $g_{ij} = g(b_i, b_j)$ is called the matrix of the bilinear form $g$ with respect to the basis $\beta$; we will call the matrix $G$ a Gram matrix of $g$

let $v, w \in V$ be represented in $\beta$ as \begin{align*} \left[ v \right]_{\beta} &= \left[ \begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{array} \right] \quad \left[ w \right]_{\beta} = \left[ \begin{array}{c} \sigma_1 \\ \sigma_2 \\ \vdots \\ \sigma_n \end{array} \right] \\ v &= \alpha_1 b_1 + \alpha_2 b_2 + \dots + \alpha_n b_n \tag{1}\\ w &= \sigma_1 b_1 + \sigma_2 b_2 + \dots + \sigma_n b_n \tag{2} \end{align*} then by bilinearity \begin{align*} g(v, w) &= g(\sum_{i=1}^{n} \alpha_i b_i, \sum_{j=1}^{n} \sigma_j b_j) \\ g(v, w) &= g(\alpha_1 b_1 + \dots + \alpha_n b_n, \sum_{j=1}^{n} \sigma_j b_j) \\ g(v, w) &= g(\alpha_1 b_1, \sum_{j=1}^{n} \sigma_j b_j) + \dots + g(\alpha_n b_n, \sum_{j=1}^{n} \sigma_j b_j)\\ g(v, w) &= \sum_{i=1}^{n} g(\alpha_i b_i, \sum_{j=1}^{n} \sigma_j b_j)\\ g(v, w) &= \sum_{i=1}^{n} g(\alpha_i b_i, \sigma_1 b_1 + \dots + \sigma_n b_n)\\ g(v, w) &= \sum_{i=1}^{n} g(\alpha_i b_i, \sigma_1 b_1) + \dots + \sum_{i=1}^{n} g(\alpha_i b_i, \sigma_n b_n)\\ g(v, w) &= \sum_{i=1}^{n} \sum_{j=1}^{n} g(\alpha_i b_i, \sigma_j b_j) \\ g(v, w) &= \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \sigma_j g(b_i, b_j) \\ \text{From (1) and (2)} \\ g(v, w) &= \left[ v \right]_{\beta}^{T} G \left[ w \right]_{\beta} \\ \end{align*} Concrete example:
\begin{align*} G &= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \\ g(v, w) &= \left[ x_1, y_1 \right] G \left[ \begin{array}{c} x_2 \\ y_2 \end{array} \right] \\ g(v, w) &= \left< v, w \right> = x_1 x_2 + y_1 y_2 \\ \\ G &= \begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix} \\ g(v, w) &= \det \left( \begin{bmatrix} x_1 & x_2 \\ y_1 & y_2 \end{bmatrix} \right) = x_1 y_2 - x_2 y_1 \\ \end{align*} A bilinear form is called a symmetric form if ...
A bilinear form is called a skew-symmetric form if ...
matrix bilinear form ...
quadratic form
inner product
cross product
determinant function
http://localhost/pdf/bilinearforms1.pdf

\[ \textbf{Inner Product} \] \[ \text{Positivity} \] \[ \langle\mathbf{v}, \mathbf{v}\rangle \geq 0 \] \[ \langle \mathbf{v} , \mathbf{v} \rangle = \mathbf{0} \iff \mathbf{v} = \mathbf{0}\] \[ \text{Linearity in the first component} \] \[ \langle c_{1}\mathbf{v_1} \,, \mathbf{v_2}\rangle = c_{1}\langle \mathbf{v_1}, \mathbf{v_2}\rangle \] \[ \langle \mathbf{v_1} + \mathbf{v_2} \,, \mathbf{v_3} \rangle = \langle \mathbf{v_1} \,, \mathbf{v_3}\rangle + \langle \mathbf{v_2} \,, \mathbf{v_3}\rangle\] \[ \langle c_{1}\mathbf{v_1} + c_{2}\mathbf{v_2}, \mathbf{v_3}\rangle = c_{1}\langle \mathbf{v_1}, \mathbf{v_3}\rangle + c_{2}\langle\mathbf{v_2}, \mathbf{v_3} \rangle \] \[ \text{Conjugate Symmetry}\] \[ \langle \mathbf{v_1}, \mathbf{v_2} \rangle = \overline{\langle \mathbf{v_2}, \mathbf{v_1} \rangle}\] \[ \text{Properties of the Inner product}\] \[ \langle \mathbf{v_1} \,, \lambda \mathbf{v_2} \rangle = \overline{\langle \lambda\mathbf{v_2} \,, \mathbf{v_1} \rangle} = \overline{\lambda} \overline{\langle \mathbf{v_2} \,, \mathbf{v_1} \rangle} = \overline{\lambda} \langle \mathbf{v_1} \,, \mathbf{v_2} \rangle \]
\[ \textbf{Outer Product} \] The outer product $\vec{u} \otimes \vec{v}$ is equivalent to $u v^{T}$, for instance \[ u \otimes v = u v^{T} = \left[ \begin{array}{c} u_1 \\ u_2 \\ u_3 \end{array} \right] \left[ \begin{array}{cccc} v_1 & v_2 & v_3 & v_4 \end{array} \right] = \begin{bmatrix} u_1 v_1 & u_1 v_2 & u_1 v_3 & u_1 v_4 \\ u_2 v_1 & u_2 v_2 & u_2 v_3 & u_2 v_4 \\ u_3 v_1 & u_3 v_2 & u_3 v_3 & u_3 v_4 \\ \end{bmatrix} \] \[ \textbf{Matrix Multiplication defined as Outer Products} \] \[ \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ \end{bmatrix} = \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} \otimes \begin{bmatrix} b_{11} \\ b_{12} \end{bmatrix} + \begin{bmatrix} a_{12} \\ a_{22} \end{bmatrix} \otimes \begin{bmatrix} b_{21} \\ b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} \\ a_{21} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \end{bmatrix} + \begin{bmatrix} a_{12} \\ a_{22} \end{bmatrix} \begin{bmatrix} b_{21} & b_{22} \end{bmatrix} \]
Partial Order on a Set
A binary relation $\leq$ defined on a set $A$ is called a $\textbf{partial order}$ on the set $A$ if the following conditions hold for all $a, b, c \in A$ \begin{align*} &a \leq a \\ &a \leq b \text{ and } b \leq a \Rightarrow a = b \\ &a \leq b \text{ and } b \leq c \Rightarrow a \leq c \\ &\text{If, in addition, for every } a, b \in A \\ &a \leq b \text{ or } b \leq a \\ &\text{ then } \leq \text{ is a total order on } A \\ \end{align*}
Projection Matrix. This is the simplest application of the OUTER product.
\begin{equation} \begin{aligned} \left< u, v \right> &= \|\vec{u}\| \| \vec{v}\| \cos{\phi} \\ \cos \phi &= \frac{\left < u , v \right >}{\|u\|\|v\| } \\ \vec{u} \text{ project on } \vec{v} \\ \|u\| \cos \phi \frac{v}{\|v\|} &= \|u\| \frac{ \left < u, v \right>}{\|u\|\|v\|} \frac{v}{\|v\|}\\ \|u\| \frac{v}{\|v\|} \cos \phi &= \frac{ \left < u, v \right>}{ \left < v, v \right> } v \\ proj_v &= \frac{\left < u, v \right>}{ \left < v, v \right> } v = v \frac{\left< v, u \right>}{\left< v, v \right>} = \frac{v (v^{T} u)}{v^T v} = \frac{(v v^{T}) u}{v^{T}v} \\ \\ \implies proj_v &= \left < u, v \right> v \quad \text{ where } \|v\| = 1\\ \implies proj_v &= v v^{T} u \quad \text{ where } \|v\| = 1\\ v v^{T} &\text{ is projection matrix from u onto v} \text{ where } \|v\| = 1\\ \end{aligned} \end{equation}
Projection Matrix 2
Given vectors: $u, v$.
Project u onto v, let $p = \sigma v$ where $\sigma$ is a scalar

we have
\[ v \perp (p - u) \] \begin{align*} v^{T} (\sigma v - u) &= 0 \\ \sigma v^{T} v - v^{T} u &= 0 \\ \sigma v^{T} v &= v^{T} u \\ \sigma &= \frac{v^{T}u}{v^{T}v} \\ \sigma v &= v\frac{v^{T} u}{v^{T}v} \\ \sigma v &= \frac{ v v^{T} u } {v^{T} v} \\ \sigma v &= \frac{1}{v^{T} v} v v^{T} u \\ \sigma v &= \frac{v v^{T}}{v^{T} v} u \\ P &= \frac{v v^{T}}{ v^{T} v } \\ \end{align*} Show $P^{2} = P$, i.e. $P$ is idempotent.
\begin{align*} P^{2} &= \frac{v v^{T}}{v^{T} v} \frac{v v^{T}}{ v^{T} v} \\ P^{2} &= \frac{v (v^{T} v) v^{T}}{v^{T} v v^{T} v} \\ P^{2} &= \frac{ v v^T} { v^T v} = P \end{align*} $vv^T$ is the projection matrix which projects $u$ onto $v$ where $|v| = 1$
Concrete Example: Let's project $ u = \left[ \begin{array}{c} 1 \\ 1 \end{array} \right]$ onto $ v = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right]$
\[ vv^{T} = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \left[ \begin{array}{cc} 0 & 1 \end{array} \right] = \begin{bmatrix} 0 & 0\\ 0 & 1 \end{bmatrix}, \qquad \mathrm{Proj}_v\, u = \begin{bmatrix} 0 & 0\\ 0 & 1 \end{bmatrix} \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \]
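A tiny Scala check of the same computation (the helper names are mine):

// Project u onto v using p = (v^T u / v^T v) * v; here u = (1, 1) and v = (0, 1).
object Projection extends App {
  def proj(v: Array[Double], u: Array[Double]): Array[Double] = {
    val vu = v.zip(u).map { case (a, b) => a * b }.sum   // v^T u
    val vv = v.map(a => a * a).sum                       // v^T v
    v.map(_ * (vu / vv))                                 // (v^T u / v^T v) * v
  }
  println(proj(Array(0.0, 1.0), Array(1.0, 1.0)).mkString(", "))   // 0.0, 1.0
}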

The Inner Product is NOT a Bilinear Form over the complex numbers $\mathbb{C}$: Sesquilinear Form versus Bilinear Form
What is a Bilinear Form: given a vector space $V$ and a field $\mathbb{F} = \mathbb{C}$, $H: V \times V \rightarrow \mathbb{F}$
let $u, v, w \in V, \sigma \in \mathbb{F}$; it satisfies the following properties:
$H(u + v, w) = H(u, w) + H(v, w)$
$H(u, v + w) = H(u, v) + H(u, w)$
$\color{red}{H(\sigma u, v) = \sigma H(u, v)}$
$\color{red}{H(u, \sigma v) = \sigma H(u, v)}$

Inner Product: given a vector space $V$ over the field $\mathbb{F} = \mathbb{C}$, let $u, v, w \in V, \text{ and } \sigma \in \mathbb{F}$
$\left<,\right >: V \times V \rightarrow \mathbb{F}$; it satisfies the following properties:
$\left< u + v, w \right > = \left< u, w \right > + \left< v, w \right >$
$\color{red}{\left< \sigma u, w \right > = \sigma^{*} \left< u, w \right >} $
$\color{red}{\left< u, \sigma v \right > = \sigma \left< u, v \right >} $
$\left< u, v \right > = \overline{\left< v, u \right >}$
$\left< u, u \right > \geq 0 $
$\left< u, u \right > = 0 \Leftrightarrow u = 0 $

\begin{align*} &\text{Inner Product over the reals $\mathbb{R}$ versus the complex numbers $\mathbb{C}$} \\ &\left<\sigma u, v \right > = \overline{\sigma} \left< u, v \right > = \sigma \left< u, v \right > \quad \because \sigma = \overline{\sigma} \quad \sigma \in \mathbb{R} \\ &\left<\sigma u, v \right > = \overline{\sigma} \left< u, v \right > \neq \sigma \left< u, v \right > \text{ in general} \quad \because \sigma \neq \overline{\sigma} \quad \sigma \in \mathbb{C} \\ \end{align*}
Sesquilinear Form and Bilinear Form
Given $ \left< , \right> : V \times V \rightarrow \mathbb{F}$ \begin{align*} &\left< u + v, w \right> = \left< u, w \right> + \left< v, w \right> \\ &\left< u , v + w \right> = \left< u, v \right> + \left< u, w \right> \\ &\left< \sigma u, v \right> = \overline{\sigma} \left< u, v \right> \\ &\left< u, \sigma v \right> = \sigma \left< u, v \right> \\ \end{align*}
Jordan Curve Theorem
Every simple closed plane curve divides the plane into two components
The above theorem sounds obvious, yet a rigorous proof is surprisingly hard.
Collinear points
Three or more points are said to be collinear if they lie on a single straight line

Euclid's three postulates
1. A straight line segment can be drawn joining any two points
2. A straight line segment can be extended indefinitely to a straight line
3. Given any straight line segment, a circle can be drawn having the segment as radius and one endpoint as center

Cross Product
Given vectors $u = (b_1, b_2, b_3)$ and $v = (c_1, c_2, c_3)$, the cross product of $u, v$ is the following:
\begin{equation} \begin{aligned} u \times v &= \begin{vmatrix} i & j & k \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} \\ &= i (-1)^{1 + 1} \begin{vmatrix} b_2 & b_3 \\ c_2 & c_3 \end{vmatrix} + j (-1)^{2 + 1} \begin{vmatrix} b_1 & b_3 \\ c_1 & c_3 \end{vmatrix} + k (-1)^{3 + 1} \begin{vmatrix} b_1 & b_2 \\ c_1 & c_2 \\ \end{vmatrix} \end{aligned} \end{equation} The direction of the cross product can be determined by the Right Hand Rule
The vector form can be written as follows:
\[ u \times v = \|u\| \|v\| \sin{\alpha} \, \vec{n} \quad \text{ where } \vec{n} \text{ is the unit vector perpendicular to the plane containing } \vec{u}, \vec{v} \]
The magnitude of the cross product, $\|u \times v\| = \|u\| \|v\| \sin{\alpha}$, is the area of the parallelogram spanned by $\vec{u}, \vec{v}$; the volume of the parallelepiped spanned by $\vec{u}, \vec{v}, \vec{w}$ is the scalar triple product $|(\vec{u} \times \vec{v}) \cdot \vec{w}|$
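A small Scala sketch of the component formula above (the function name is mine):

// Cross product of u = (b1, b2, b3) and v = (c1, c2, c3) via the cofactor expansion above.
object Cross extends App {
  def cross(u: (Double, Double, Double), v: (Double, Double, Double)): (Double, Double, Double) =
    (u._2 * v._3 - u._3 * v._2,   // i component
     u._3 * v._1 - u._1 * v._3,   // j component (sign flip from the expansion)
     u._1 * v._2 - u._2 * v._1)   // k component

  println(cross((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))   // (0.0,0.0,1.0): e1 x e2 = e3
}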

Prove that the determinant of a matrix is equal to the determinant of its transpose
Proof 1:
Given an $n \times n$ matrix $m$, prove $\det m = \det m^{T}$
From the $QR$ decomposition, any square matrix can be decomposed as $m = QR$, where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix \begin{equation} \begin{aligned} m &= QR \\ m^{T} &= (QR)^{T} = R^{T} Q^{T} \\ \det m^{T} &= \det R^{T} \det Q^{T} \\ \det R^{T} &= \det R \quad \because R^{T} \text{ is triangular with the same diagonal as } R \\ \det Q^{T} &= \det Q^{-1} = \frac{1}{\det Q} = \det Q \quad \because Q^{T} = Q^{-1} \text{ and } \det Q = \pm 1 \\ \Rightarrow \det m^{T} &= \det R \det Q = \det (QR) = \det m \quad \square \\ \end{aligned} \end{equation}
Proof 2: Using co-factor expansion and induction on the dimension $n$.
coming soon
Unit Circle and Group Law

Draw the line parallel to $\overline{P_1 P_2}$ passing through $O$ \[ P_1 \oplus P_2 = P_3 \]
Circle Inversion

\[ \begin{aligned} r^2 &= \overline{OP'} \cdot \overline{OP} \\ \frac{r}{\overline{OP'}} &= \frac{\overline{OP}}{r} \end{aligned} \] If $\overline{OP} = 0$ (i.e. $P = O$), then $P$ maps to $\infty$
Transition Map and Diffeomorphism

Injective
If $f(x_1) = f(x_2) \in Y$ then $x_1 = x_2 \in X$ $ \Rightarrow f(x)$ is injective
Injective means there is a unique value $x \in X$ for each $y$ in the image of $f$
Surjective
If $\forall y \in Y \quad \exists x \in X $ such that $f(x) = y \Rightarrow f(x)$ is surjective.
Surjective means for each $y \in Y$ there always exists $x \in X$ such that $f(x) = y$
Continuous Function
A function $f : X \rightarrow Y$ is continuous at $x_0 \in X$ if the following is true:
$\forall \, \epsilon > 0 \; \exists \, \delta > 0$ such that $|x - x_0| < \delta \implies |f(x) - f(x_0)| < \epsilon$; $f$ is continuous if it is continuous at every $x_0 \in X$