Brainstorming a new page
The equations below are the beginning of pages that may or may not get added to my website. All pages after Modular arithmetic and this one started here in the brainstorm page. I got (almost none of) these from copying things from a piece of paper (actually, just the one below) that had potential to become a digital page. And finally, the titles have a question mark if they are my best guess for the title of the page (or if I never published the page but still want the world to see it), no question mark if I made the page and know the title, and extra small if it is just one puzzle that didn't fit anywhere and/or wasn't worth making a dedicated page. Also, happy eclipse day! The eclipse was worth the \(10\) (total) hours of driving.
galois theory/group theory/ring theory?¶
A field of numbers is a collection of numbers where you can add, subtract, multiply, and divide two numbers (as long as you don't divide by zero) in that field to get another number in that field; if you can only sometimes divide, it's called a ring instead. For example, the rationals, the reals and the complex numbers are all fields, and they're all infinite. The complex lattice points (which are complex numbers of the form \(integer + integer \cdot i\)), the matrices (insert joke here), and the integers are all infinite rings that can not always be divided. I know what you might be thinking: "what the stand-in-for-a-curse-word is a non-infinite field?". A non-infinite field (or finite field) is something like the modulo numbers. You might not be able to divide in a non-prime base, but you can do it in a prime base. So that completes the venn diagram!
Today, I want to show you an infinite ring where you can not always divide. I'm talking (well, typing) about numbers of the form \(a + b \sqrt{7}\)
for integers \(a\) and \(b\).
Side note! A number of this form can only be written in one way. Not because \(\sqrt{7}\) is an imaginary number, but because \(\sqrt{7}\) is irrational. End of side note.
You can probably take my word for it that you can add, subtract, and multiply these numbers to get another one, but here's a proof (which was the only thing that was on the paper that inspired this (digital) page, other than division and square roots).
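In case it helps, here's the identity the multiplication part of that proof boils down to (my reconstruction, not necessarily the paper's): \((a + b \sqrt{7})(c + d \sqrt{7}) = (ac + 7bd) + (ad + bc) \sqrt{7}\), which is again of the required form because \(ac + 7bd\) and \(ad + bc\) are integers. (Addition and subtraction are even more direct: \((a + b \sqrt{7}) \pm (c + d \sqrt{7}) = (a \pm c) + (b \pm d) \sqrt{7}\).)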
.
why can't you multiply two vectors? (all endings)?¶
geometric algebra¶
Yeah, I forgot all the endings to multiplying vectors, so I came up with the previous one. Well, I guess I had this one, but I would rather make it into its own page. After watching A Swift Introduction to Geometric Algebra (literally, that was the name), I thought that (if it is a scalar plus a bivector), then it is just a scalar plus a vector times \(i\) (or \(-i\), I am not sure yet), but I will call it \(U\) instead. But first, here's the definition of multiplying two basis vectors (all the alternative endings probably just had alternative definitions for this; I think that I remember a definition where multiplication was anticommutative, which would lead me to the cross product): the product of a basis vector \(e_1\) with itself is \(1\), and the product of two distinct basis vectors \(e_1\) and \(e_2\) equals \(-e_2 e_1\). This means that you can do this at any point in the product of basis vectors.
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
\(100\) Lines btw.
Puzzle time! Prove that \(U \vec{v} = \vec{v} U\).
.
\(2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2\) lines!
.
.
Also, the cross product only works in \(3d\), while this \(\hat{i} \hat{j} \hat{k} \, (\vec{u} \times \vec{v})\) thing works in any dimension. This operator actually has a name (well, two names): the outer product (as opposed to the dot product, sometimes referred to as the inner product), or the wedge product, for the wedge character it's written with. This more general cross product is written \(\vec{u} ∧ \vec{v}\).
Maxwell's equation (singular)¶
\(\frac{1}{\vec{v}}\)¶
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
And perfect timing, it is exactly \(200\) lines.
mini quadratic¶
How this page works is that I count up in binary such that if the last digit is one, it has a constant term; if the second to last digit is one, it has a linear term; and if the third to last digit is one, it has a quadratic term. Starting at \(0\) with the number above. (There's a small code sketch after the list.)
.
000
001
010
011
100
\(300\) Lines.
101
110
111
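Here's a minimal sketch (my own) of the counting scheme above, where each \(3\)-bit number picks which terms show up:

```python
# Enumerate the polynomials encoded by the binary count above:
# last bit = constant term, second-to-last = linear, third-to-last = quadratic.
for n in range(8):
    bits = f"{n:03b}"                       # e.g. "101"
    terms = []
    if bits[0] == "1": terms.append("x^2")  # third-to-last digit
    if bits[1] == "1": terms.append("x")    # second-to-last digit
    if bits[2] == "1": terms.append("1")    # last digit
    print(bits, "->", " + ".join(terms) if terms else "0")
```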
line through two points?¶
multivector times tables¶
\(0\) d
\(1\) d
\(2\) d
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
\(400\) lines.
\(3\) d
set theory/logic (definitions)¶
.
set theory (definitions from those definitions)?¶
.¶
Was recursion in the rule book? I guess so.
set theory (numbers)¶
set theory proofs?¶
No.
bayes'?¶
counting in binary¶
Binary is a way to count where, instead of the places \(1\), \(10\), \(100\), \(1,000\), and so on, you use \(1\), \(2\), \(4\), \(8\), \(16\), \(32\), \(64\), \(128\), \(256\), \(512\), \(1,024\), \(2,048\), \(4,096\), \(8,192\), \(16,384\), \(32,768\), \(65,536\), \(131,072\), \(262,144\), and so on for place values. (Also, if you were using binary to begin with, those would be \(1\), \(10\), \(100\) and so on.) So the numbers \(1-10\) would be \(|\), \(|.\), \(||\), \(|..\), \(|.|\), \(||.\), \(|||\), \(|...\), \(|..|\), \(|.|.\) (Period.) But how would you pronounce this?
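If you want to play with that \(|\) and \(.\) notation, here's a tiny sketch (my own, not part of the original page) that renders \(1\) through \(10\):

```python
def strokes(n: int) -> str:
    # binary digits of n, with "|" for one and "." for zero
    return bin(n)[2:].replace("1", "|").replace("0", ".")

print(", ".join(strokes(n) for n in range(1, 11)))
# |, |., ||, |.., |.|, ||., |||, |..., |..|, |.|.
```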
here's a thing that I worked on for hours, but did not want to delete¶
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
(insert text here)
\(600\) lines.
number theory¶
```python
def SmallestPrimeDivisor(n):
    # smallest k >= 2 that divides n (n itself when n is prime)
    k = 2
    while n % k != 0:
        k += 1
    return k

def vec(n):
    # every prime up to n
    primes = []
    for i in range(2, n + 1):
        if SmallestPrimeDivisor(i) == i:
            primes.append(i)
    # the exponent of each prime in the factorization of n
    AlmostResult = []
    for i in range(len(primes)):
        p = primes[i]
        power = 0
        k = n
        while k % p == 0:
            k //= p
            power += 1
        AlmostResult.append(power)
    # trim the trailing zeros
    k = 1
    while AlmostResult[len(AlmostResult) - k] == 0:
        k += 1
    result = []
    for i in range(len(AlmostResult) - k + 1):
        result.append(AlmostResult[i])
    return result
```
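And a quick sanity check of `vec` (my own; it assumes \(n \ge 2\)):

```python
print(vec(12))  # [2, 1]        because 12 = 2^2 * 3^1
print(vec(35))  # [0, 0, 1, 1]  because 35 = 5^1 * 7^1
```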
lambda simplification¶
\(700\) lines, this page might beat the code repo page and become the new longest page on the website.
(unfinished) cogputer¶
\(0 \to\) | \(1 \to 0\) | \(2 \to 1\) | \(3 \to 7\) | \(4 \to 2\) | \(5 \to\) | \(6 \to 8\) | \(7 \to\) | \(8 \to 3\) | \(9 \to 14\) |
---|---|---|---|---|---|---|---|---|---|
00 | 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | |
00 | 01 | 02 | 04 | 08 | 16 | 32 | 64 | 03 | 06 |
10 | 24 | 00 | 00 | 00 | 09 | 00 | 00 | 00 | 00 |
20 | 00 | 27 | 00 | 00 | 00 | 00 | 00 | 00 | 81 |
30 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 |
40 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 |
50 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 |
60 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 |
70 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 |
80 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 |
90 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 | 00 |
logic to set theory¶
\(777\) Lines.
\(800\) Lines.
projective geometry¶
Credit (even if it is very small): The two points that lie on every circle (???) #SoME3, Putting Algebraic Curves in Perspective, and Extraordinary Conics: The Most Difficult Math Problem I Ever Solved.
Yes, I know, the last one was added literally \(4\) days ago, but I thought of something else to talk about.
Let's say that a point \((a: b)\) (as opposed to \((a, b)\)) is equal to \((ca: cb)\) \((c \ne 0)\), so every* (and that's a big asterisk) point \((a: b)\) can be scaled onto \((\frac{a}{b}: 1)\), a kind of number line.
*unless \(b = 0\), in which case we add this kind of "point at infinity" to our number line (it's a single point because any point \((a: 0)\) can be scaled to \((1: 0)\) (aka the point at infinity); that is, of course, unless \(a = 0\), but that point isn't really allowed for the same reason as \(\frac{0}{0}\)), making it the real projective line or \(ℝ \text{P}^1\).
The reason why it's at infinity is because, if you consider the point \((1: 1)\), it falls onto \(1\) on the number line, the point \((1: \frac{1}{2})\) falls onto \(2\), the point \((1: \frac{1}{4})\) falls onto \(4\), the point \((1: \frac{1}{8})\) falls onto \(8\), and as the second number gets smaller, the point on the number line gets bigger approaching infinity, hence the name "point at infinity". But, if you instead do this from the other direction, it approaches negative infinity. You can imagine a number line that curves down as it goes along, consecutive integers getting closer and closer, and an unsigned infinity at the bottom where the line meets itself.
Stepping a dimension up, you get the real projective plane or \(ℝ \text{P}^2\), \((a: b: c) = (da: db: dc)\), most numbers going to \((\frac{a}{c}: \frac{b}{c}: 1)\), some becoming \((\frac{a}{b}: 1: 0)\), less becoming \((1: 0: 0)\), the point at infinity becomes a line at infinity (more of a circle, but \(1\) degree of freedom, so it's a line), and the number line becomes a space of all points.
There is a problem though (that is big enough to be explained on a line by itself): you could imagine the same process that I used to prove the unsigned infinity thing, but in \(ℝ \text{P}^2\), to get the unsurprising result of \((a: b: 0) = (-a: -b: 0)\). This does mean that, when drawing the regular or affine plane and drawing a circle around it (to represent the line at infinity, of course), if you wanted to draw, say, the point \((1: 1: 0)\), it would need to be at both the very top right and the very bottom left of the circle.
To see why this double counting thing makes sense, I'll project onto a unit sphere, so, if \(r = \sqrt{a^2 + b^2 + c^2}\), the point \((a: b: c)\) maps to \((\frac{a}{r}: \frac{b}{r}: \frac{c}{r})\). You might see the problem though, it also maps to \((-\frac{a}{r}: -\frac{b}{r}: -\frac{c}{r})\), because it's also on the unit sphere. So, if you just consider the top half of the sphere (including the equator so that points at infinity are accounted for), it counts almost every point once, and points at infinity twice, kinda like the one where we projected onto the plane parallel to and one unit above the \(xy\) plane. So, to fix this problem, and give every point the same treatment, you (counterintuitively) count every point twice by using the entire sphere, kinda like giving every line in \(2d\) an angle instead of a slope to fix the vertical lines problem, at the cost of there being two angles for every line.
Yes, I know, the plane is placed one unit below the sphere instead of one above, but it's only like that for the sake of demonstration.
duality¶
It's hard to explain how points are dual to lines, but an example would be the origin and the line at infinity, or on the sphere, the equator and the north and south poles (remember, two solutions). The more general definition would be something like this: the two points on a sphere, a point on the dual line, and the point \(90°\) away but still on the dual line are all mutually perpendicular. By the way, points on the plane project to antipodal points on the sphere, and lines on the plane project to great circles on the sphere.
Also fun fact: the duals of every point on a line would all pass through the dual point, and the duals of every line that passes through a point would all lie on the dual line.
linear algebra, complex numbers, and higher dimensional complex numbers?¶
You can also multiply vectors, but that's a story for another day.
\(900\) Lines.
Complex numbers are, to quote Morphocular in this video, the language of \(2d\) rotation. I'll describe them in an unusual way:
If you want a better rotation formula, here it is!
homogenization¶
Homogenization is a method of translating equations on the affine plane (the non-projective plane) into equations on the projective plane (so, adding the line at infinity), but I think it would be better if I just showed how to do it.
Let's say that I have these equations for describing my line:
so, we have this equation:
and from those, I have this new equation for describing my line:
Now, the equation is homogeneous \(^1\).
In (this \((x, y, z) = (cx, cmx + cb, c)\)) equation for a line, \(z\) could not equal \(0\), but now, \(z\) can equal \(0\), and if \(z = 0\), then it's at the line at infinity, so these \(z = 0\) solutions snuck in as a result of homogenization, mission success!
\(^1\) That is, a polynomial where each term has the same degree. There is a much easier way of getting there, also called homogenization: you take each term whose degree is not the max, and multiply in factors of \(z\) to bring the degree up to the max.
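For example (my own, following the footnote's recipe): in \(y = mx + b\), the terms \(y\) and \(mx\) already have degree \(1\), but \(b\) has degree \(0\), so it becomes \(bz\), giving the homogeneous equation \(y = mx + bz\). Setting \(z = 1\) recovers the original line, and new \(z = 0\) solutions are now possible.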
But what are these solutions at the line at infinity?
This has some pretty cool implications, but I'll do that tomorrow.
Oh, look, it's tomorrow, time to tell you the implications.
\(1000\) Lines, wow.
Y'know how any two distinct points on the affine plane have a line through them? And how (almost) any two distinct lines on the affine plane have a point on both? That is, of course, unless the lines are parallel. Solution: homogenization. A homogenized line with slope \(m\) has the point \((1: m: 0)\) (and \((0: 1: 0)\) if the line is vertical). So, if two lines have the same slope \(m\) (and are distinct), then they don't meet normally, and they intersect at \((1: m: 0)\). If they have different slopes, then they do meet normally, and they don't intersect at the line at infinity. But what about the "any two distinct points have a line through them" rule? If you have a normal point and a point at infinity \((1: m: 0)\), they have the line with slope \(m\) going through the first one. But what if you have two points on the line at infinity? This (among other things) is why it's called the line at infinity, a line that all points at infinity lie on.
greek letters¶
Alpha | Beta | Gamma | Delta | Epsilon | Zeta | Eta | Theta | Iota | Kappa | Lambda | Mu | Nu | Xi | Omicron | Pi | Rho | Sigma (yes, actually) | Tau | Upsilon | Phi | Chi | Psi | Omega |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
\(\alpha\) | \(\beta\) | \(\gamma\) | \(\delta\) | \(\epsilon\) | \(\zeta\) | \(\eta\) | \(\theta\) | \(\iota\) | \(\kappa\) | \(\lambda\) | \(\mu\) | \(\nu\) | \(\xi\) | \(\omicron\) | \(\pi\) | \(\rho\) | \(\sigma\) | \(\tau\) | \(\upsilon\) | \(\phi\) | \(\chi\) | \(\psi\) | \(\omega\) |
\(\Alpha\) | \(\Beta\) | \(\Gamma\) | \(\Delta\) | \(\Epsilon\) | \(\Zeta\) | \(\Eta\) | \(\Theta\) | \(\Iota\) | \(\Kappa\) | \(\Lambda\) | \(\Mu\) | \(\Nu\) | \(\Xi\) | \(\Omicron\) | \(\Pi\) | \(\Rho\) | \(\Sigma\) | \(\Tau\) | \(\Upsilon\) | \(\Phi\) | \(\Chi\) | \(\Psi\) | \(\Omega\) |
A | B |  |  | E | Z | H |  | I | K |  | M | N |  | O |  | P |  | T |  |  | X |  |  |
cursed math¶
fixed point combinators¶
When I say "fixed point combinator", what I really mean is a combinator \(p\) such that \(f(p(f)) = p(f)\). The term "fixed point" just means: \(x\) is a fixed point of \(f\) if and only if \(f(x) = x\). Constructing your own fixed point combinator is a puzzle found in the lambda paper, after it shows you the \(\text{Y}\) combinator and the older Turing fixed point combinator \(\Theta\). In both of them, \(p(f)\) reduces to \(f(p(f))\). Here's a proof:
But this challenge of making your own fixed point combinator is really easy (I'll use the Theta combinator as an example). First, we need a combinator that reduces to itself, a self referential combinator (such as \(\Omega\), or \(\text{M} (\text{M})\), or \((\lambda x. x(x))(\lambda x. x(x))\); they're all the same thing. Actually, I'm gonna re-derive The Omega). And for that, we need a form, where a form has some \(f\)s, and maybe \(x\)s and \(y\)s, where it is \(f\) of single things (so no \(f(x(y))\)s), one of which is another \(f\). The one that I'm gonna use (and the simplest one) is \(f(f)\). To make a self referential combinator out of this, we're gonna need to make a combinator \(A\) where \(A(A)\) reduces to itself. That is, \(A\) of all the given inputs (just \(A\)) returns \(A(A)\). So if you evaluate \(A\) on \(A\), you get \(A(A)\), meaning \(A\) must be the self application combinator \(\lambda x. x(x)\). To turn this self referential combinator into a fixed point combinator, you just need to make \(A(A)(f)\) reduce to \(f(A(A)(f))\). But, to avoid confusion with the original \(A\), I'll call it \(B\). First, as a starting point, \(B\) should equal \(A\) but with one more input (so \(B = \lambda xy. x(x)(y)\)). And, now that \(B\) can factor in \(f\), we can make \(B\) of \(B\) and \(f\) output \(f(B(B)(f))\). But this is easy, just change the definition of \(B\) to \(\lambda xy. y(x(x)(y))\). And we now have \(\Theta\).
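Here's the same construction as a sketch in Python (my own; Python evaluates eagerly, so I eta-expand the inner call, which turns \(\Theta\) into the strict \(\text{Z}\)-style combinator):

```python
# B is the lambda xy. y(x(x)(y)) trick from above, eta-expanded so that
# Python doesn't loop forever trying to evaluate B(B)(f) before f runs.
B = lambda x: lambda f: f(lambda v: x(x)(f)(v))
Z = B(B)  # fixed point combinator: Z(f) behaves like f(Z(f))

fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```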
linear systems of equations¶
\(1102\) Lines.
linear algebra¶
This will be a series of subchapters about linear algebra. In particular, the more general mathematician's version. But if you want some more intuition about how it works, each subchapter will have a corresponding part in this playlist. The first thing to do in linear algebra is to...
part \(1\): choose your fighte- vector space!¶
As you might know, the main thing in linear algebra is the vector. So, to make this as general as possible, I'm gonna let you make your own vector space (a space in which vectors live). Something important that defines a vector is that there's a sense of vector addition \(\vec{u} + \vec{v}\) (for vectors \(\vec{u}\) and \(\vec{v}\)) and there's a sense of scalar multiplication \(c \vec{v}\) (for a vector \(\vec{v}\) and a scalar (real number) \(c\)). But, you cannot (necessarily) multiply vectors or add vectors and scalars (unless you use geometric algebra). But, for something to qualify as a vector space, there are some more rules/axioms it has to follow (assume that your vector space is denoted as \(\text{V}\) with vectors \(\vec{u}\), \(\vec{v}\), \(\vec{w}\) and scalars \(x\), \(y\), and \(z\); there's a quick code spot-check after the list):
Rule #\(1\):
\(\vec{u} + (\vec{v} + \vec{w}) = (\vec{u} + \vec{v}) + \vec{w}\)
Rule #\(2\):
\(\vec{u} + \vec{v} = \vec{v} + \vec{u}\)
Rule #\(3\) with words:
There is a vector \(\vec{0}\) aka "the zero vector" such that \(\vec{v} + \vec{0} = \vec{v}\) for all \(\vec{v}\)
Rule #\(3\) with set theory:
\(\exists \vec{0} \in \text{V}. ∀ \vec{v} \in \text{V}. \vec{v} + \vec{0} = \vec{v}\)
Rule #\(4\) with words:
For any \(\vec{v}\) there is a \(-\vec{v}\) such that \(\vec{v} + (-\vec{v}) = \vec{0}\)
Rule #\(4\) with set theory:
\(∀ \vec{v} \in \text{V}. \exists -\vec{v} \in \text{V}. \vec{v} + (-\vec{v}) = \vec{0}\)
Rule #\(5\):
\(x(y \vec{v}) = (xy) \vec{v}\)
Rule #\(6\):
\(1 \vec{v} = \vec{v}\)
Rule #\(7\):
\(x(\vec{u} + \vec{v}) = x \vec{u} + x \vec{v}\)
Rule #\(8\):
\((x + y) \vec{v} = x \vec{v} + y \vec{v}\)
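Here's a quick numeric spot check of rules \(1\) through \(8\) (my own; a check, not a proof) for the vector space of pairs of numbers; integers keep the arithmetic exact:

```python
import random

def add(u, v):   return (u[0] + v[0], u[1] + v[1])
def scale(c, v): return (c * v[0], c * v[1])
def neg(v):      return scale(-1, v)

u, v, w = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(3)]
x, y = random.randint(-9, 9), random.randint(-9, 9)
zero = (0, 0)

assert add(u, add(v, w)) == add(add(u, v), w)                # rule 1
assert add(u, v) == add(v, u)                                # rule 2
assert add(v, zero) == v                                     # rule 3
assert add(v, neg(v)) == zero                                # rule 4
assert scale(x, scale(y, v)) == scale(x * y, v)              # rule 5
assert scale(1, v) == v                                      # rule 6
assert scale(x, add(u, v)) == add(scale(x, u), scale(x, v))  # rule 7
assert scale(x + y, v) == add(scale(x, v), scale(y, v))      # rule 8
print("all eight rules check out")
```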
The vector space that (at least to me) makes all of the intuition click is arrows in space where it's the same if it has the same length and direction (hence the little arrow over every vector). The result of adding two of them is putting the base of the second on the tip of the first and drawing a new arrow from the base of the first to the tip of the second. The result of multiplying one of these by a number is scaling the length by a factor of the number (hence the name) and flipping the vector and scaling the length by a factor of the absolute value of the number if it is negative. You can convince yourself that this is a vector space. Also, these sorts of vectors are usually rooted at the origin.
Another commonly used definition of a vector is that of lists of numbers. The result of adding two of them is adding them term by term, and the result of multiplying one of these by a number is multiplying each term by said number. You can convince yourself that this is a vector space.
You can convert from the first definition to the second by making a list of the vector's coordinates and doing the opposite to convert from a list of numbers to an arrow.
Now that you have chosen a vector space, we can now move on to...
part \(2\): linear combinations, span, and basis vectors¶
In \(2d\) (arrows in the plane, or lists of two numbers), there are vectors that will prove to be very important. The first is called \(\hat{x}\) (x hat): the unit vector pointing to the right (in the direction of the \(x\) axis), or the list of numbers \(\begin{bmatrix} 1 \\ 0 \\ \end{bmatrix}\) (vectors that are of length \(1\) are denoted with a hat). The second is \(\hat{y}\) (y hat): the unit vector pointing up (in the direction of the \(y\) axis), AKA the list of numbers \(\begin{bmatrix} 0 \\ 1 \\ \end{bmatrix}\).
If you think about it, any \(2d\) vector \(\begin{bmatrix} x \\ y \\ \end{bmatrix}\) can be written in terms of \(\hat{x}\) and \(\hat{y}\) (i.e. \(x \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} x \\ 0 \\ \end{bmatrix}\), \(y \begin{bmatrix} 0 \\ 1 \\ \end{bmatrix} = \begin{bmatrix} 0 \\ y \\ \end{bmatrix}\), \(\begin{bmatrix} x \\ 0 \\ \end{bmatrix} + \begin{bmatrix} 0 \\ y \\ \end{bmatrix} = \begin{bmatrix} x \\ y \\ \end{bmatrix}\)).
\(1200\) Lines.
Alternatively (and you might need a grid for this), you can take the unit vector in the x direction and scale it until its tip is on the same vertical line as the tip of your vector, and do the same thing with the y direction. And, when you add them up, (you can deduce that) you get your original vector. This can be used as another way to go back and forth between the two definitions of a vector.
By the way, this is called a linear combination of \(\hat{x}\) and \(\hat{y}\) (linear because if you fix one and vary the other, it traces out a line if you look at the tip of the result).
Because every \(2d\) vector can be made out of \(\hat{x}\) and \(\hat{y}\), they are called the basis vectors.
Also, every vector that can be formed by adding and scaling \(\hat{x}\), \(\hat{y}\), and any other vector formed in this way is of the form \(a \hat{x} + b \hat{y}\), and the reason why is because \((a \hat{x} + b \hat{y}) + (c \hat{x} + d \hat{y}) = (a + c) \hat{x} + (b + d) \hat{y}\) and \(c(a \hat{x} + b \hat{y}) = (ca) \hat{x} + (cb) \hat{y}\).
Also by the way, \(a \vec{u} + b \vec{v}\) is called a linear combination of \(\vec{u}\) and \(\vec{v}\).
But this begs the question: we could've used any other two basis vectors and we would've gotten another completely sensible way of going back and forth between the two definitions of a vector. That is, of course, unless the two vectors are aligned with each other (or either is the zero vector).
By the way, the set of all the vectors that can be made with a linear combination of two vectors is called the span of those two vectors. This idea of using different basis vectors, aka a different basis, is something that I'll go much more in detail about later.
Also, if you have just one vector, think of it as an arrow, but if you have many vectors, think of each of them as a point where the point lies at the tip of the vector.
But, things get more interesting in \(3d\), now it's \(a \vec{u} + b \vec{v} + c \vec{w}\) for scalars \(a\), \(b\), \(c\). And if the third is in the span of the other two, it doesn't change the span and it's still a flat sheet cutting through the origin.
You can imagine the first two forming a plane and then the third one moving the plane around sweeping it through space. Another intuition is that you're using all three scalars to your advantage, you can't replace one of them with the other two.
Whenever you can remove a vector without changing the span, the vectors are known as linearly dependent; but \(\hat{z}\) signed the declaration of independence ~\(250\) years ago, so \(\hat{x}\), \(\hat{y}\), and \(\hat{z}\) span all of \(3d\) space.
So, the more formal definition of a basis is a set of linearly independent vectors that span all of space.
part \(3\): matrices and linear transformations¶
Let's start off this part with a quote:
\(1234\) lines.
No one really understands The Matrix, you just have to see for yourself
-Morpheus
Jokes aside, for this part I'm going to be talking about linear transformations. Transformation is just a fancy word for function (in this context, it's a function that inputs and outputs vectors), but what makes it linear is that it preserves the two operations of vector addition and scalar multiplication, that is, \(L(\vec{u} + \vec{v}) = L(\vec{u}) + L(\vec{v})\) and \(L(c \vec{v}) = c L(\vec{v})\) (I'll explain why the word linear is used later).
But, if you were given one of these guys, how would you describe it numerically? What is \(L(\vec{v})\)?
Well, describe \(\vec{v}\) as a linear combination of \(\hat{x}\) and \(\hat{y}\), so \(v_x \hat{x} + v_y \hat{y}\)
This is why it's called a linear transformation, \(L(\vec{v})\) is a linear combination of \(L(\hat{x})\) and \(L(\hat{y})\)
So, literally all you need to define a (\(2d\)) linear transformation is where \(\hat{x}\) and \(\hat{y}\) each go.
Here's a concrete example: let's say that the transformation applied to \(\hat{x}\) is \(\begin{bmatrix} 1 \\ -2 \\ \end{bmatrix}\) and the transformation applied to \(\hat{y}\) is \(\begin{bmatrix} 3 \\ 0 \\ \end{bmatrix}\), then the transformation applied to \(-1 \hat{x} + 2 \hat{y}\) should be \(-1 \begin{bmatrix} 1 \\ -2 \\ \end{bmatrix} + 2 \begin{bmatrix} 3 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} (-1)(1) + (2)(3) \\ (-1)(-2) + (2)(0) \\ \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \\ \end{bmatrix}\)
Ok, got all that?
In general, this transformation applied to \(\begin{bmatrix} x \\ y \\ \end{bmatrix}\) is \(\begin{bmatrix} 1x + 3y \\ -2x + 0y \\ \end{bmatrix}\). You give me any vector and I tell you the output vector.
What I'm saying is that the linear transformation \(L\) is completely determined by four numbers: the \(x\) coordinate of the transformed \(\hat{x}\), the y coordinate of the transformed \(\hat{x}\), the \(x\) coordinate of the transformed \(\hat{y}\), and the y coordinate of the transformed \(\hat{y}\).
Usually how you write a linear transformation is with a \(2 \times 2\) group of numbers, also called a \(2 \times 2\) matrix. You can read off the first column as where \(\hat{x}\) goes and the second as where \(\hat{y}\) goes.
By the way, a matrix \(A\) times a vector \(\vec{v}\) is written \(A \vec{v}\).
If you're given a matrix describing a linear transformation, and you're also given some specific vector, and you want to compute the linear transformation evaluated on said vector, you multiply the coordinates of the vector by the columns of the matrix and add up the results.
Here's a concrete example:
What about the most general possible example of matrix vector multiplication:
\(1300\) Lines.
You could even use this formula as a definition. And then you could teach it to high schoolers worldwide and not teach them the key intuition that makes it intuitive (\(x \begin{bmatrix} a \\ c \\ \end{bmatrix} + y \begin{bmatrix} b \\ d \\ \end{bmatrix}\))
Isn't it better to think of the columns of the matrix as where \(\hat{x}\) and \(\hat{y}\) each go and the result of multiplying a matrix by a vector as the appropriate linear combination?
How would you describe a linear transformation like a \(90°\) counterclockwise rotation? (Yes, that is a linear transformation.) Well, \(\hat{x}\) gets rotated up to \(\begin{bmatrix} 0 \\ 1 \\ \end{bmatrix}\) (\(\hat{y}\)) and \(\hat{y}\) gets rotated over to \(\begin{bmatrix} -1 \\ 0 \\ \end{bmatrix}\) (\(-\hat{x}\)). So the result should be the matrix \(\begin{bmatrix} 0 & -1 \\ 1 & 0 \\ \end{bmatrix}\), and if you want to rotate any vector counterclockwise by \(90\) degrees, just multiply it by that matrix.
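Here's a small sketch (my own) of "matrix times vector as a linear combination of the columns", using the rotation example above:

```python
# A 2x2 matrix stored as its two columns: where x-hat and y-hat land.
def mat_vec(col_x, col_y, v):
    x, y = v
    return (x * col_x[0] + y * col_y[0],
            x * col_x[1] + y * col_y[1])

# The 90 degree counterclockwise rotation: x-hat -> (0, 1), y-hat -> (-1, 0).
print(mat_vec((0, 1), (-1, 0), (1, 0)))  # (0, 1)
print(mat_vec((0, 1), (-1, 0), (3, 2)))  # (-2, 3)
```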
On the other hand, if the two columns are linearly dependent, the transformation squishes all of space onto one line, the span of the two linearly dependent columns.
Summary:
Linear transformations are those that preserve the operations of vector addition and scalar multiplication, which you can think of as transformations of space that keep the grid lines parallel and evenly spaced with the origin remaining fixed. But to describe your linear transformation, you only need a handful of numbers: the coordinates of where the basis vectors land. Matrices give us a language for linear transformations: just read off the columns and you'll know where the basis vectors land. And matrix vector multiplication just tells you what the linear transformation does to a given vector.
linear systems of equations (but with sum notation)¶
\(1400\) lines.
solutions to said linear systems of equations¶
conclusions¶
conclusion #1:
\(1516\) Lines
conclusion #2:
general matrix multiplication¶
\(1 + 1\) and the foundations of modern mathematics¶
Before I start, none of this was scripted.
What I want to do here is prove that \(1 + 1 = 2\). But most of the time, you find yourself in a loop of defining things. For example: what is \(1\)? \(1\) is the successor of \(0\). What is the successor operation? The successor operation is the function that-
\(1600\) Lines.
"Stop right there!" Said person #\(2\), "What is a function?".
"Okay, fine!" Said person #\(1\), "I'll instead say that \(1\) is the set that contains \(0\)." (written \(\{ 0 \}\).)
"That's better, but can you express \(\{ 0 \}\) more formally?" Said person #\(2\).
"Well, what I mean by that is: \(∀x.x \in \{ 0 \} \iff x = 0\)." Said person #\(1\). (For any/all \(x\), \(x\) is within \(\{ 0 \}\) precisely when \(x = 0\).)
"But what is \(=\)?" Said person #\(2\).
"Axiom #\(1\) of ZFC: the axiom of extensionality" said person #\(1\), "it states that \(S = T\) precisely when for any \(s \in S\), \(s \in T\), and for any \(t \in T\), \(t \in S\)"
"Okay, so what you're saying is that \(∀x.x \in 1 \iff (∀y.y \in x → y \in 0) ∧ (∀y.y \in 0 → y \in x)\)." Said person #\(2\).
"Yes. Is there anything else left undefined?" Said person #\(1\).
"Yes, always!" Said person #\(2\). "What is \(0\)?"
"Are you really gonna make me answer that?" Said person #\(1\).
"Yes!" Said person #\(2\).
"Okay, fine!" Person #\(1\) said with frustration. "\(0\) is Ø" (the empty set) "is the set with nothing in it, so \(¬∃x.x \in Ø\)."
"So, what you really meant by \(∀x.x \in 1 \iff (∀y.y \in x → y \in 0) ∧ (∀y.y \in 0 → y \in x)\) was \(∀x.x \in 1 \iff ¬∃y.y \in x\)." Said person #\(2\)
"Yes!" Said person #\(1\).
"So \(1\) is the set of all empty sets, of which there are only one" Said person #\(2\) "am I understanding this correctly?" Said person #\(2\)
"Yes!" Said person #\(1\).
I'm tired of this conversation between a mathematician and probably a mathematical snob who only accepts the truest logical statements crafted from pure mathematical set theory.
I feel like doing something else, how about Russell's paradox? It states that there is no set that contains exactly the sets that don't contain themselves.
Next: the first axiom of set theory (The Axiom of Extensionality). It states that two sets are equal if they have the same elements, but I think it actually means that if two sets are equal (i.e. they have the same elements), a set cannot contain just one of them; it has to contain either both or neither of them.
Next: the second axiom of set theory (The Axiom of Foundation). It states that every non-empty set must have an element disjoint from itself (i.e. an element where the intersection of that element and the original set is empty (i.e. they don't have any common elements)).
Next: the third axiom of set theory (The Axiom of Pairing). Actually, I'm not going to use the axiom of pairing; I'm going to use the closely related singleton axiom. It states that if you have a set, then there exists the set containing that set, as opposed to the axiom of pairing, which says that if you have two sets, then there is a set containing both of them. These two statements are closely related, but I prefer the first one.
Also, I realized that this axiom makes the axiom of regularity redundant. Let's say that \(S = \{ S \}\). Then you would say that \(S\) is a set because it is equal to the set containing \(S\). But for that to work, we would first need to prove that \(S\) is a set, and to do that, we would again need to prove that \(S\) is a set, and so on; you just never get to the bottom of it, and you can never declare that \(S\) is a set.
Also, this is a weird kind of axiom, because it doesn't always make the set containing a set into a set; you still have to prove that with the other rules. So we just knocked out two axioms of set theory with one stone.
\(\text{Symm}_4\)¶
I'm going to use a notation, and this is how it works: I will notate \((1, 2, 3)(4, 5)\) as the function on a string that brings the first term to the second place in the string, the second term to the third place, and the third term back to the first place, while also swapping the fourth and fifth terms. And also, I'm going to do composition from left to right instead of from right to left, so the composition of \(f\) and \(g\) does \(f\) first, then \(g\).
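For example (my own check of the notation): applied to the string "abcde", \((1, 2, 3)(4, 5)\) sends "abc" to "cab" and swaps "d" and "e", giving "cabed".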
Here's a chart of each permutation multiplied by each other permutation of four elements:
\(\times\) | () | (1, 2) | (2, 3) | (1, 2, 3) | (1, 3, 2) | (1, 3) | (3, 4) | (1, 2)(3, 4) | (2, 3, 4) | (1, 2, 3, 4) | (1, 3, 4, 2) | (1, 3, 4) | (2, 4, 3) | (1, 2, 4, 3) | (2, 4) | (1, 2, 4) | (1, 3)(2, 4) | (1, 3, 2, 4) | (1, 4, 3, 2) | (1, 4, 3) | (1, 4, 2) | (1, 4) | (1, 4, 2, 3) | (1, 4)(2, 3) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
() | () | (1, 2) | (2, 3) | (1, 2, 3) | (1, 3, 2) | (1, 3) | (3, 4) | (1, 2)(3, 4) | (2, 3, 4) | (1, 2, 3, 4) | (1, 3, 4, 2) | (1, 3, 4) | (2, 4, 3) | (1, 2, 4, 3) | (2, 4) | (1, 2, 4) | (1, 3)(2, 4) | (1, 3, 2, 4) | (1, 4, 3, 2) | (1, 4, 3) | (1, 4, 2) | (1, 4) | (1, 4, 2, 3) | (1, 4)(2, 3) |
(1, 2) | (1, 2) | |||||||||||||||||||||||
(2, 3) | (2, 3) | |||||||||||||||||||||||
(1, 2, 3) | (1, 2, 3) | |||||||||||||||||||||||
(1, 3, 2) | (1, 3, 2) | |||||||||||||||||||||||
(1, 3) | (1, 3) | |||||||||||||||||||||||
(3, 4) | (3, 4) | |||||||||||||||||||||||
(1, 2)(3, 4) | (1, 2)(3, 4) | |||||||||||||||||||||||
(2, 3, 4) | (2, 3, 4) | |||||||||||||||||||||||
(1, 2, 3, 4) | (1, 2, 3, 4) | |||||||||||||||||||||||
(1, 3, 4, 2) | (1, 3, 4, 2) | |||||||||||||||||||||||
(1, 3, 4) | (1, 3, 4) | |||||||||||||||||||||||
(2, 4, 3) | (2, 4, 3) | |||||||||||||||||||||||
(1, 2, 4, 3) | (1, 2, 4, 3) | |||||||||||||||||||||||
(2, 4) | (2, 4) | |||||||||||||||||||||||
(1, 2, 4) | (1, 2, 4) | |||||||||||||||||||||||
(1, 3)(2, 4) | (1, 3)(2, 4) | |||||||||||||||||||||||
(1, 3, 2, 4) | (1, 3, 2, 4) | |||||||||||||||||||||||
(1, 4, 3, 2) | (1, 4, 3, 2) | |||||||||||||||||||||||
(1, 4, 3) | (1, 4, 3) | |||||||||||||||||||||||
(1, 4, 2) | (1, 4, 2) | |||||||||||||||||||||||
(1, 4) | (1, 4) | |||||||||||||||||||||||
(1, 4, 2, 3) | (1, 4, 2, 3) | |||||||||||||||||||||||
(1, 4)(2, 3) | (1, 4)(2, 3) |
\(1717\) Lines.
I don't feel like filling out the rest of the \(529\) entries of this table. Instead I'll ask my dad to write some code to do it for me, and then use that information to create a new times table.
magic squares¶
If you don't know, a magic square is a square of numbers, typically \(3 \times 3\), where the sums of each row, each column, and the two diagonals are all equal. Particularly, what I would like to do here is find how many numbers it takes to define an affine magic square. That is, one without the diagonal requirement, because then if you rotate it, or move the first column to the end, or the first row to the end, or swap two rows or two columns, it will still work.
a puzzle¶
Okay, here's the puzzle: you are greeted by \(10\) boxes that each contain a random number from \(0\) to \(1\), and you want to get the biggest one. So what you can do is open the first box, and you can either choose to take it or leave it; if you leave it, then you can never come back to it. What is your strategy for getting the highest number? And how does that strategy scale for more or fewer boxes?
According to a Veritasium video about the number \(37\), the answer is to check the first \(37\)%, reject all of those, and then after that, choose the first one that's bigger than the biggest one in those first \(37\)%. And if those first \(37\)% contained the biggest one (which happens \(37\)% of the time), then tough luck!
I thought: what if you didn't choose some boxes to reject always and then choose the best one after that? Then I was thinking: what is the perfect strategy?
Well, for one box it's really obvious: just choose that box. For two boxes, it also seems kind of obvious: if the first one is more than one half, take it, and if it's less than one half, pass it.
What about three boxes? Well, I know that if I somehow get down to two, then I know a strategy. So at what threshold \(t\) should I keep the first one? Well, what's the expected value for two of them? Half of the time the first is less than \(\frac{1}{2}\) and you have to skip it, with an average value for the other one being \(\frac{1}{2}\); but half of the time it's more than \(\frac{1}{2}\) and the average is \(\frac{3}{4}\). So compute \(\frac{1}{2} \cdot \frac{1}{2} + \frac{1}{2} \cdot \frac{3}{4} = \frac{1}{4} + \frac{3}{8} = \frac{2}{8} + \frac{3}{8} = \frac{5}{8}\). So if the value in the first box is more than \(\frac{5}{8}\), keep it, and if it's less, skip it.
But what about a more general case? If I have \(n\) boxes then what is the threshold (call it \(f(n)\)) where if it's more than that, I should keep it, and if it's less than that, I should leave it, and if it's exactly that, then either one.
Well, I know that it's equal to the expected score for \(n - 1\) boxes (denoted \(\text{ES} (n - 1)\)), and \(\text{ES} (n)\) must be equal to the probability that a random number from \(0\) to \(1\) is less than \(f(n)\) (that is, \(f(n)\) itself), multiplied by \(\text{ES} (n - 1)\), plus the probability that it is more than \(f(n)\) (that is, \(1 - f(n)\)), and in that case, the expected value is \(\frac{1 + f(n)}{2} = \frac{1}{2} + \frac{f(n)}{2}\). So the expected score (and hence \(f(n + 1)\)) is given by:
And coupled with the results that there is no threshold for one box (i.e. the threshold is \(0\)), we now have an inductive formula for the perfect strategy.
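Here's a tiny sketch (my own) that runs the induction; plugging \(f(n) = \text{ES} (n - 1)\) into the formula collapses it to \(\text{ES} (n) = \frac{1 + \text{ES} (n - 1)^2}{2}\):

```python
# The first-box threshold with n boxes is ES(n - 1), the expected score
# of playing perfectly with one fewer box.
ES = 1 / 2  # expected score with a single box
for n in range(2, 11):
    print(f"first-box threshold for {n} boxes: {ES:.3f}")
    ES = (1 + ES * ES) / 2
# ...ends with: first-box threshold for 10 boxes: 0.850 (the ~0.849 below)
```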
So, the answer to the original question for \(n = 10\) is: If the first one is more than about \(0.849\), then keep it, if not, then pass it. And if the second one is more than about \(0.836\), then keep it, if not, then pass it. And if the third one is more than about \(0.82\), then keep it, if not, then pass it. And if the fourth one is more than about \(0.8\), then keep it, if not, then pass it. And if the fifth one is more than about \(0.775\), then keep it, if not, then pass it. And if the sixth one is more than about \(0.741\), then keep it, if not, then pass it. And if the seventh one is more than about \(0.695\), then keep it, if not, then pass it. And if the eighth one is more than \(0.625\), then keep it, if not, then pass it. And if the ninth one is more than \(\frac{1}{2}\), then keep it, if not, then pass it.
\(\text{Symm}_3\)¶
Last time I tried this it was too much data and my dad still hasn't finished that code. So I'm going to make a multiplication table of a more manageable size, then find all of its symmetric beauty.
\(\times\) | () | (1, 2) | (2, 3) | (1, 2, 3) | (1, 3, 2) | (1, 3) |
---|---|---|---|---|---|---|
() | () | (1, 2) | (2, 3) | (1, 2, 3) | (1, 3, 2) | (1, 3) |
(1, 2) | (1, 2) | () | (1, 3, 2) | (1, 3) | (2, 3) | (1, 2, 3) |
(2, 3) | (2, 3) | (1, 2, 3) | () | (1, 2) | (1, 3) | (1, 3, 2) |
(1, 2, 3) | (1, 2, 3) | (2, 3) | (1, 3) | (1, 3, 2) | () | (1, 2) |
(1, 3, 2) | (1, 3, 2) | (1, 3) | (1, 2) | () | (1, 2, 3) | (2, 3) |
(1, 3) | (1, 3) | (1, 3, 2) | (1, 2, 3) | (2, 3) | (1, 2) | () |
\(1776 + 4\) Lines.
Process: swap around pieces of paper labeled \(1\), \(2\), and \(3\), do another swap, look at the table for what the answer should be based on the pieces of paper (see table below), and then repeat that \(25\) times.
\(123\) | () |
---|---|
\(213\) | (1, 2) |
\(132\) | (2, 3) |
\(312\) | (1, 2, 3) |
\(231\) | (1, 3, 2) |
\(321\) | (1, 3) |
So now I'm going to swap out the swaps with letters of the alphabet, and remove the trivial first row and first column.
\(\times\) | \(a\) | \(b\) | \(c\) | \(d\) | \(e\) |
---|---|---|---|---|---|
\(a\) | \(id\) | \(d\) | \(e\) | \(b\) | \(c\) |
\(b\) | \(c\) | \(id\) | \(a\) | \(e\) | \(d\) |
\(c\) | \(b\) | \(e\) | \(d\) | \(id\) | \(a\) |
\(d\) | \(e\) | \(a\) | \(id\) | \(c\) | \(b\) |
\(e\) | \(d\) | \(c\) | \(b\) | \(a\) | \(id\) |
\(1802\) Lines.
If you were wondering, I'm not using \(i\) for the identity, because in group theory, you use an \(e\). But \(e\) was already taken, so instead of using \(i\), I used \(id\), which is the standard category theory notation.
Notice any patterns? Well, the first thing that jumped out to me a few months ago was the string of \(id\)s across the diagonal, interrupted by \(c\) and \(d\).
The reason why was because my first encounter with this group was an equivalent group, \(D_3\) (the dihedral group of the triangle: the group of all rotations and reflections of a triangle that leave the corners looking the same), as opposed to \(\text{Symm}_3\), which is the group of all ways to arrange three objects. What I noticed was that \(a\), \(b\), and \(e\) were reflections, so of course doing them twice would result in the same thing.
SIR model¶
extreme SIR model¶
\(\text{Symm}_4\) attempt \(2\)¶
Status update: my dad still hasn't finished the code so I wrote my own code. The original output is:
[0 1 2 3] | [1 0 2 3] | [0 2 1 3] | [2 0 1 3] | [1 2 0 3] | [2 1 0 3] | [0 1 3 2] | [1 0 3 2] | [0 3 1 2] | [3 0 1 2] | [1 3 0 2] | [3 1 0 2] | [0 2 3 1] | [2 0 3 1] | [0 3 2 1] | [3 0 2 1] | [2 3 0 1] | [3 2 0 1] | [1 2 3 0] | [2 1 3 0] | [1 3 2 0] | [3 1 2 0] | [2 3 1 0] | [3 2 1 0] | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
[0 1 2 3] | [0 1 2 3] | [1 0 2 3] | [0 2 1 3] | [2 0 1 3] | [1 2 0 3] | [2 1 0 3] | [0 1 3 2] | [1 0 3 2] | [0 3 1 2] | [3 0 1 2] | [1 3 0 2] | [3 1 0 2] | [0 2 3 1] | [2 0 3 1] | [0 3 2 1] | [3 0 2 1] | [2 3 0 1] | [3 2 0 1] | [1 2 3 0] | [2 1 3 0] | [1 3 2 0] | [3 1 2 0] | [2 3 1 0] | [3 2 1 0] |
[1 0 2 3] | [1 0 2 3] | [0 1 2 3] | [2 0 1 3] | [0 2 1 3] | [2 1 0 3] | [1 2 0 3] | [1 0 3 2] | [0 1 3 2] | [3 0 1 2] | [0 3 1 2] | [3 1 0 2] | [1 3 0 2] | [2 0 3 1] | [0 2 3 1] | [3 0 2 1] | [0 3 2 1] | [3 2 0 1] | [2 3 0 1] | [2 1 3 0] | [1 2 3 0] | [3 1 2 0] | [1 3 2 0] | [3 2 1 0] | [2 3 1 0] |
[0 2 1 3] | [0 2 1 3] | [1 2 0 3] | [0 1 2 3] | [2 1 0 3] | [1 0 2 3] | [2 0 1 3] | [0 3 1 2] | [1 3 0 2] | [0 1 3 2] | [3 1 0 2] | [1 0 3 2] | [3 0 1 2] | [0 3 2 1] | [2 3 0 1] | [0 2 3 1] | [3 2 0 1] | [2 0 3 1] | [3 0 2 1] | [1 3 2 0] | [2 3 1 0] | [1 2 3 0] | [3 2 1 0] | [2 1 3 0] | [3 1 2 0] |
[2 0 1 3] | [2 0 1 3] | [2 1 0 3] | [1 0 2 3] | [1 2 0 3] | [0 1 2 3] | [0 2 1 3] | [3 0 1 2] | [3 1 0 2] | [1 0 3 2] | [1 3 0 2] | [0 1 3 2] | [0 3 1 2] | [3 0 2 1] | [3 2 0 1] | [2 0 3 1] | [2 3 0 1] | [0 2 3 1] | [0 3 2 1] | [3 1 2 0] | [3 2 1 0] | [2 1 3 0] | [2 3 1 0] | [1 2 3 0] | [1 3 2 0] |
[1 2 0 3] | [1 2 0 3] | [0 2 1 3] | [2 1 0 3] | [0 1 2 3] | [2 0 1 3] | [1 0 2 3] | [1 3 0 2] | [0 3 1 2] | [3 1 0 2] | [0 1 3 2] | [3 0 1 2] | [1 0 3 2] | [2 3 0 1] | [0 3 2 1] | [3 2 0 1] | [0 2 3 1] | [3 0 2 1] | [2 0 3 1] | [2 3 1 0] | [1 3 2 0] | [3 2 1 0] | [1 2 3 0] | [3 1 2 0] | [2 1 3 0] |
[2 1 0 3] | [2 1 0 3] | [2 0 1 3] | [1 2 0 3] | [1 0 2 3] | [0 2 1 3] | [0 1 2 3] | [3 1 0 2] | [3 0 1 2] | [1 3 0 2] | [1 0 3 2] | [0 3 1 2] | [0 1 3 2] | [3 2 0 1] | [3 0 2 1] | [2 3 0 1] | [2 0 3 1] | [0 3 2 1] | [0 2 3 1] | [3 2 1 0] | [3 1 2 0] | [2 3 1 0] | [2 1 3 0] | [1 3 2 0] | [1 2 3 0] |
[0 1 3 2] | [0 1 3 2] | [1 0 3 2] | [0 2 3 1] | [2 0 3 1] | [1 2 3 0] | [2 1 3 0] | [0 1 2 3] | [1 0 2 3] | [0 3 2 1] | [3 0 2 1] | [1 3 2 0] | [3 1 2 0] | [0 2 1 3] | [2 0 1 3] | [0 3 1 2] | [3 0 1 2] | [2 3 1 0] | [3 2 1 0] | [1 2 0 3] | [2 1 0 3] | [1 3 0 2] | [3 1 0 2] | [2 3 0 1] | [3 2 0 1] |
[1 0 3 2] | [1 0 3 2] | [0 1 3 2] | [2 0 3 1] | [0 2 3 1] | [2 1 3 0] | [1 2 3 0] | [1 0 2 3] | [0 1 2 3] | [3 0 2 1] | [0 3 2 1] | [3 1 2 0] | [1 3 2 0] | [2 0 1 3] | [0 2 1 3] | [3 0 1 2] | [0 3 1 2] | [3 2 1 0] | [2 3 1 0] | [2 1 0 3] | [1 2 0 3] | [3 1 0 2] | [1 3 0 2] | [3 2 0 1] | [2 3 0 1] |
[0 3 1 2] | [0 3 1 2] | [1 3 0 2] | [0 3 2 1] | [2 3 0 1] | [1 3 2 0] | [2 3 1 0] | [0 2 1 3] | [1 2 0 3] | [0 2 3 1] | [3 2 0 1] | [1 2 3 0] | [3 2 1 0] | [0 1 2 3] | [2 1 0 3] | [0 1 3 2] | [3 1 0 2] | [2 1 3 0] | [3 1 2 0] | [1 0 2 3] | [2 0 1 3] | [1 0 3 2] | [3 0 1 2] | [2 0 3 1] | [3 0 2 1] |
[3 0 1 2] | [3 0 1 2] | [3 1 0 2] | [3 0 2 1] | [3 2 0 1] | [3 1 2 0] | [3 2 1 0] | [2 0 1 3] | [2 1 0 3] | [2 0 3 1] | [2 3 0 1] | [2 1 3 0] | [2 3 1 0] | [1 0 2 3] | [1 2 0 3] | [1 0 3 2] | [1 3 0 2] | [1 2 3 0] | [1 3 2 0] | [0 1 2 3] | [0 2 1 3] | [0 1 3 2] | [0 3 1 2] | [0 2 3 1] | [0 3 2 1] |
[1 3 0 2] | [1 3 0 2] | [0 3 1 2] | [2 3 0 1] | [0 3 2 1] | [2 3 1 0] | [1 3 2 0] | [1 2 0 3] | [0 2 1 3] | [3 2 0 1] | [0 2 3 1] | [3 2 1 0] | [1 2 3 0] | [2 1 0 3] | [0 1 2 3] | [3 1 0 2] | [0 1 3 2] | [3 1 2 0] | [2 1 3 0] | [2 0 1 3] | [1 0 2 3] | [3 0 1 2] | [1 0 3 2] | [3 0 2 1] | [2 0 3 1] |
[3 1 0 2] | [3 1 0 2] | [3 0 1 2] | [3 2 0 1] | [3 0 2 1] | [3 2 1 0] | [3 1 2 0] | [2 1 0 3] | [2 0 1 3] | [2 3 0 1] | [2 0 3 1] | [2 3 1 0] | [2 1 3 0] | [1 2 0 3] | [1 0 2 3] | [1 3 0 2] | [1 0 3 2] | [1 3 2 0] | [1 2 3 0] | [0 2 1 3] | [0 1 2 3] | [0 3 1 2] | [0 1 3 2] | [0 3 2 1] | [0 2 3 1] |
[0 2 3 1] | [0 2 3 1] | [1 2 3 0] | [0 1 3 2] | [2 1 3 0] | [1 0 3 2] | [2 0 3 1] | [0 3 2 1] | [1 3 2 0] | [0 1 2 3] | [3 1 2 0] | [1 0 2 3] | [3 0 2 1] | [0 3 1 2] | [2 3 1 0] | [0 2 1 3] | [3 2 1 0] | [2 0 1 3] | [3 0 1 2] | [1 3 0 2] | [2 3 0 1] | [1 2 0 3] | [3 2 0 1] | [2 1 0 3] | [3 1 0 2] |
[2 0 3 1] | [2 0 3 1] | [2 1 3 0] | [1 0 3 2] | [1 2 3 0] | [0 1 3 2] | [0 2 3 1] | [3 0 2 1] | [3 1 2 0] | [1 0 2 3] | [1 3 2 0] | [0 1 2 3] | [0 3 2 1] | [3 0 1 2] | [3 2 1 0] | [2 0 1 3] | [2 3 1 0] | [0 2 1 3] | [0 3 1 2] | [3 1 0 2] | [3 2 0 1] | [2 1 0 3] | [2 3 0 1] | [1 2 0 3] | [1 3 0 2] |
[0 3 2 1] | [0 3 2 1] | [1 3 2 0] | [0 3 1 2] | [2 3 1 0] | [1 3 0 2] | [2 3 0 1] | [0 2 3 1] | [1 2 3 0] | [0 2 1 3] | [3 2 1 0] | [1 2 0 3] | [3 2 0 1] | [0 1 3 2] | [2 1 3 0] | [0 1 2 3] | [3 1 2 0] | [2 1 0 3] | [3 1 0 2] | [1 0 3 2] | [2 0 3 1] | [1 0 2 3] | [3 0 2 1] | [2 0 1 3] | [3 0 1 2] |
[3 0 2 1] | [3 0 2 1] | [3 1 2 0] | [3 0 1 2] | [3 2 1 0] | [3 1 0 2] | [3 2 0 1] | [2 0 3 1] | [2 1 3 0] | [2 0 1 3] | [2 3 1 0] | [2 1 0 3] | [2 3 0 1] | [1 0 3 2] | [1 2 3 0] | [1 0 2 3] | [1 3 2 0] | [1 2 0 3] | [1 3 0 2] | [0 1 3 2] | [0 2 3 1] | [0 1 2 3] | [0 3 2 1] | [0 2 1 3] | [0 3 1 2] |
[2 3 0 1] | [2 3 0 1] | [2 3 1 0] | [1 3 0 2] | [1 3 2 0] | [0 3 1 2] | [0 3 2 1] | [3 2 0 1] | [3 2 1 0] | [1 2 0 3] | [1 2 3 0] | [0 2 1 3] | [0 2 3 1] | [3 1 0 2] | [3 1 2 0] | [2 1 0 3] | [2 1 3 0] | [0 1 2 3] | [0 1 3 2] | [3 0 1 2] | [3 0 2 1] | [2 0 1 3] | [2 0 3 1] | [1 0 2 3] | [1 0 3 2] |
[3 2 0 1] | [3 2 0 1] | [3 2 1 0] | [3 1 0 2] | [3 1 2 0] | [3 0 1 2] | [3 0 2 1] | [2 3 0 1] | [2 3 1 0] | [2 1 0 3] | [2 1 3 0] | [2 0 1 3] | [2 0 3 1] | [1 3 0 2] | [1 3 2 0] | [1 2 0 3] | [1 2 3 0] | [1 0 2 3] | [1 0 3 2] | [0 3 1 2] | [0 3 2 1] | [0 2 1 3] | [0 2 3 1] | [0 1 2 3] | [0 1 3 2] |
[1 2 3 0] | [1 2 3 0] | [0 2 3 1] | [2 1 3 0] | [0 1 3 2] | [2 0 3 1] | [1 0 3 2] | [1 3 2 0] | [0 3 2 1] | [3 1 2 0] | [0 1 2 3] | [3 0 2 1] | [1 0 2 3] | [2 3 1 0] | [0 3 1 2] | [3 2 1 0] | [0 2 1 3] | [3 0 1 2] | [2 0 1 3] | [2 3 0 1] | [1 3 0 2] | [3 2 0 1] | [1 2 0 3] | [3 1 0 2] | [2 1 0 3] |
[2 1 3 0] | [2 1 3 0] | [2 0 3 1] | [1 2 3 0] | [1 0 3 2] | [0 2 3 1] | [0 1 3 2] | [3 1 2 0] | [3 0 2 1] | [1 3 2 0] | [1 0 2 3] | [0 3 2 1] | [0 1 2 3] | [3 2 1 0] | [3 0 1 2] | [2 3 1 0] | [2 0 1 3] | [0 3 1 2] | [0 2 1 3] | [3 2 0 1] | [3 1 0 2] | [2 3 0 1] | [2 1 0 3] | [1 3 0 2] | [1 2 0 3] |
[1 3 2 0] | [1 3 2 0] | [0 3 2 1] | [2 3 1 0] | [0 3 1 2] | [2 3 0 1] | [1 3 0 2] | [1 2 3 0] | [0 2 3 1] | [3 2 1 0] | [0 2 1 3] | [3 2 0 1] | [1 2 0 3] | [2 1 3 0] | [0 1 3 2] | [3 1 2 0] | [0 1 2 3] | [3 1 0 2] | [2 1 0 3] | [2 0 3 1] | [1 0 3 2] | [3 0 2 1] | [1 0 2 3] | [3 0 1 2] | [2 0 1 3] |
[3 1 2 0] | [3 1 2 0] | [3 0 2 1] | [3 2 1 0] | [3 0 1 2] | [3 2 0 1] | [3 1 0 2] | [2 1 3 0] | [2 0 3 1] | [2 3 1 0] | [2 0 1 3] | [2 3 0 1] | [2 1 0 3] | [1 2 3 0] | [1 0 3 2] | [1 3 2 0] | [1 0 2 3] | [1 3 0 2] | [1 2 0 3] | [0 2 3 1] | [0 1 3 2] | [0 3 2 1] | [0 1 2 3] | [0 3 1 2] | [0 2 1 3] |
[2 3 1 0] | [2 3 1 0] | [2 3 0 1] | [1 3 2 0] | [1 3 0 2] | [0 3 2 1] | [0 3 1 2] | [3 2 1 0] | [3 2 0 1] | [1 2 3 0] | [1 2 0 3] | [0 2 3 1] | [0 2 1 3] | [3 1 2 0] | [3 1 0 2] | [2 1 3 0] | [2 1 0 3] | [0 1 3 2] | [0 1 2 3] | [3 0 2 1] | [3 0 1 2] | [2 0 3 1] | [2 0 1 3] | [1 0 3 2] | [1 0 2 3] |
[3 2 1 0] | [3 2 1 0] | [3 2 0 1] | [3 1 2 0] | [3 1 0 2] | [3 0 2 1] | [3 0 1 2] | [2 3 1 0] | [2 3 0 1] | [2 1 3 0] | [2 1 0 3] | [2 0 3 1] | [2 0 1 3] | [1 3 2 0] | [1 3 0 2] | [1 2 3 0] | [1 2 0 3] | [1 0 3 2] | [1 0 2 3] | [0 3 2 1] | [0 3 1 2] | [0 2 3 1] | [0 2 1 3] | [0 1 3 2] | [0 1 2 3] |
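In case you're curious, here's a minimal sketch of the kind of code that generates this table (a reconstruction, not necessarily what I actually ran):

```python
from itertools import permutations

perms = list(permutations(range(4)))

def compose(f, g):
    # left-to-right convention from earlier: do f first, then g
    return tuple(g[f[i]] for i in range(4))

for f in perms:  # row f, column g
    print(" | ".join(str(list(compose(f, g))) for g in perms))
```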
And then I'll use ctrl + f \(24\) times to swap out the lists with letters.
\(id\) | \(a\) | \(b\) | \(c\) | \(d\) | \(e\) | \(f\) | \(g\) | \(h\) | \(i\) | \(j\) | \(k\) | \(l\) | \(m\) | \(n\) | \(o\) | \(p\) | \(q\) | \(r\) | \(s\) | \(t\) | \(u\) | \(v\) | \(w\) | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
\(id\) | \(id\) | \(a\) | \(b\) | \(c\) | \(d\) | \(e\) | \(f\) | \(g\) | \(h\) | \(i\) | \(j\) | \(k\) | \(l\) | \(m\) | \(n\) | \(o\) | \(p\) | \(q\) | \(r\) | \(s\) | \(t\) | \(u\) | \(v\) | \(w\) |
\(a\) | \(a\) | \(id\) | \(c\) | \(b\) | \(e\) | \(d\) | \(g\) | \(f\) | \(i\) | \(h\) | \(k\) | \(j\) | \(m\) | \(l\) | \(o\) | \(n\) | \(q\) | \(p\) | \(s\) | \(r\) | \(u\) | \(t\) | \(w\) | \(v\) |
\(b\) | \(b\) | \(d\) | \(id\) | \(e\) | \(a\) | \(c\) | \(h\) | \(j\) | \(f\) | \(k\) | \(g\) | \(i\) | \(n\) | \(p\) | \(l\) | \(q\) | \(m\) | \(o\) | \(t\) | \(v\) | \(r\) | \(w\) | \(s\) | \(u\) |
\(c\) | \(c\) | \(e\) | \(a\) | \(d\) | \(id\) | \(b\) | \(i\) | \(k\) | \(g\) | \(j\) | \(f\) | \(h\) | \(o\) | \(q\) | \(m\) | \(p\) | \(l\) | \(n\) | \(u\) | \(w\) | \(s\) | \(v\) | \(r\) | \(t\) |
\(d\) | \(d\) | \(b\) | \(e\) | \(id\) | \(c\) | \(a\) | \(j\) | \(h\) | \(k\) | \(f\) | \(i\) | \(g\) | \(p\) | \(n\) | \(q\) | \(l\) | \(o\) | \(m\) | \(v\) | \(t\) | \(w\) | \(r\) | \(u\) | \(s\) |
\(e\) | \(e\) | \(c\) | \(d\) | \(a\) | \(b\) | \(id\) | \(k\) | \(i\) | \(j\) | \(g\) | \(h\) | \(f\) | \(q\) | \(o\) | \(p\) | \(m\) | \(n\) | \(l\) | \(w\) | \(u\) | \(v\) | \(s\) | \(t\) | \(r\) |
\(f\) | \(f\) | \(g\) | \(l\) | \(m\) | \(r\) | \(s\) | \(id\) | \(a\) | \(n\) | \(o\) | \(t\) | \(u\) | \(b\) | \(c\) | \(h\) | \(i\) | \(v\) | \(w\) | \(d\) | \(e\) | \(j\) | \(k\) | \(p\) | \(q\) |
\(g\) | \(g\) | \(f\) | \(m\) | \(l\) | \(s\) | \(r\) | \(a\) | \(id\) | \(o\) | \(n\) | \(u\) | \(t\) | \(c\) | \(b\) | \(i\) | \(h\) | \(w\) | \(v\) | \(e\) | \(d\) | \(k\) | \(j\) | \(q\) | \(p\) |
\(h\) | \(h\) | \(j\) | \(n\) | \(p\) | \(t\) | \(v\) | \(b\) | \(d\) | \(l\) | \(q\) | \(r\) | \(w\) | \(id\) | \(e\) | \(f\) | \(k\) | \(s\) | \(u\) | \(a\) | \(c\) | \(g\) | \(i\) | \(m\) | \(o\) |
\(i\) | \(i\) | \(k\) | \(o\) | \(q\) | \(u\) | \(w\) | \(c\) | \(e\) | \(m\) | \(p\) | \(s\) | \(v\) | \(a\) | \(d\) | \(g\) | \(j\) | \(r\) | \(t\) | \(id\) | \(b\) | \(f\) | \(h\) | \(l\) | \(n\) |
\(j\) | \(j\) | \(h\) | \(p\) | \(n\) | \(v\) | \(t\) | \(d\) | \(b\) | \(q\) | \(l\) | \(w\) | \(r\) | \(e\) | \(id\) | \(k\) | \(f\) | \(u\) | \(s\) | \(c\) | \(a\) | \(i\) | \(g\) | \(o\) | \(m\) |
\(k\) | \(k\) | \(i\) | \(q\) | \(o\) | \(w\) | \(u\) | \(e\) | \(c\) | \(p\) | \(m\) | \(v\) | \(s\) | \(d\) | \(a\) | \(j\) | \(g\) | \(t\) | \(r\) | \(b\) | \(id\) | \(h\) | \(f\) | \(n\) | \(l\) |
\(l\) | \(l\) | \(r\) | \(f\) | \(s\) | \(g\) | \(m\) | \(n\) | \(t\) | \(id\) | \(u\) | \(a\) | \(o\) | \(h\) | \(v\) | \(b\) | \(w\) | \(c\) | \(i\) | \(j\) | \(p\) | \(d\) | \(q\) | \(e\) | \(k\) |
\(m\) | \(m\) | \(s\) | \(g\) | \(r\) | \(f\) | \(l\) | \(o\) | \(u\) | \(a\) | \(t\) | \(id\) | \(n\) | \(i\) | \(w\) | \(c\) | \(v\) | \(b\) | \(h\) | \(k\) | \(q\) | \(e\) | \(p\) | \(d\) | \(j\) |
\(n\) | \(n\) | \(t\) | \(h\) | \(v\) | \(j\) | \(p\) | \(l\) | \(r\) | \(b\) | \(w\) | \(d\) | \(q\) | \(f\) | \(s\) | \(id\) | \(u\) | \(e\) | \(k\) | \(g\) | \(m\) | \(a\) | \(o\) | \(c\) | \(i\) |
\(o\) | \(o\) | \(u\) | \(i\) | \(w\) | \(k\) | \(q\) | \(m\) | \(s\) | \(c\) | \(v\) | \(e\) | \(p\) | \(g\) | \(r\) | \(a\) | \(t\) | \(d\) | \(j\) | \(f\) | \(l\) | \(id\) | \(n\) | \(b\) | \(h\) |
\(p\) | \(p\) | \(v\) | \(j\) | \(t\) | \(h\) | \(n\) | \(q\) | \(w\) | \(d\) | \(r\) | \(b\) | \(l\) | \(k\) | \(u\) | \(e\) | \(s\) | \(id\) | \(f\) | \(i\) | \(o\) | \(c\) | \(m\) | \(a\) | \(g\) |
\(q\) | \(q\) | \(w\) | \(k\) | \(u\) | \(i\) | \(o\) | \(p\) | \(v\) | \(e\) | \(s\) | \(c\) | \(m\) | \(j\) | \(t\) | \(d\) | \(r\) | \(a\) | \(g\) | \(h\) | \(n\) | \(b\) | \(l\) | \(id\) | \(f\) |
\(r\) | \(r\) | \(l\) | \(s\) | \(f\) | \(m\) | \(g\) | \(t\) | \(n\) | \(u\) | \(id\) | \(o\) | \(a\) | \(v\) | \(h\) | \(w\) | \(b\) | \(i\) | \(c\) | \(p\) | \(j\) | \(q\) | \(d\) | \(k\) | \(e\) |
\(s\) | \(s\) | \(m\) | \(r\) | \(g\) | \(l\) | \(f\) | \(u\) | \(o\) | \(t\) | \(a\) | \(n\) | \(id\) | \(w\) | \(i\) | \(v\) | \(c\) | \(h\) | \(b\) | \(q\) | \(k\) | \(p\) | \(e\) | \(j\) | \(d\) |
\(t\) | \(t\) | \(n\) | \(v\) | \(h\) | \(p\) | \(j\) | \(r\) | \(l\) | \(w\) | \(b\) | \(q\) | \(d\) | \(s\) | \(f\) | \(u\) | \(id\) | \(k\) | \(e\) | \(m\) | \(g\) | \(o\) | \(a\) | \(i\) | \(c\) |
\(u\) | \(u\) | \(o\) | \(w\) | \(i\) | \(q\) | \(k\) | \(s\) | \(m\) | \(v\) | \(c\) | \(p\) | \(e\) | \(r\) | \(g\) | \(t\) | \(a\) | \(j\) | \(d\) | \(l\) | \(f\) | \(n\) | \(id\) | \(h\) | \(b\) |
\(v\) | \(v\) | \(p\) | \(t\) | \(j\) | \(n\) | \(h\) | \(w\) | \(q\) | \(r\) | \(d\) | \(l\) | \(b\) | \(u\) | \(k\) | \(s\) | \(e\) | \(f\) | \(id\) | \(o\) | \(i\) | \(m\) | \(c\) | \(g\) | \(a\) |
\(w\) | \(w\) | \(q\) | \(u\) | \(k\) | \(o\) | \(i\) | \(v\) | \(p\) | \(s\) | \(e\) | \(m\) | \(c\) | \(t\) | \(j\) | \(r\) | \(d\) | \(g\) | \(a\) | \(n\) | \(h\) | \(l\) | \(b\) | \(f\) | \(id\) |
\(1922\) Lines.
Notice any patterns? Me neither.
group theory?¶
I'm making this page to celebrate the upcoming \(2000\)th line.
Here's some great videos about the subject that should get at least some credit: Group theory, abstraction, and the 196,883-dimensional monster, What is Group Theory? — Group Theory Ep. 1 (Yes, I know, there's a part two. But I decided not to watch it because I was worried that if I watched it then I would just copy some of it), This playlist by Another Roof (I've only seen the third part).
I'll start off with the textbook definition of a group:
PS I don't have a group theory textbook.
It is a set or collection of things together with a binary operation (e.g. addition or multiplication, because they input two things and output one thing; this binary operation is usually denoted with a composition circle (this thing: ∘), so that is the notation that I will use) such that...
\(1\). Closure: (this one is sometimes a given) If you have \(a\) and \(b\) in the group, then \(a ∘ b\) is also within the group.
\(2\). Associativity: If you have \(a\), \(b\) and \(c\) in the group, then \((a ∘ b) ∘ c\) is equal to \(a ∘ (b ∘ c)\). For this reason, I will be denoting both as \(a ∘ b ∘ c\)
\(3\). Identity (or neutral depending on where you're from): There must always be a term in the group (call it \(e\)) where if you have \(a\) in the group, then \(a ∘ e\) is equal to \(e ∘ a\) is equal to \(a\).
\(4\). Inverses: If you have \(a\) in the group, then there is also \(a^{-1}\) in the group where \(a ∘ a^{-1}\) is equal to \(e\).
Notice there is no point where I say that the operation is commutative (i.e. \(a ∘ b = b ∘ a\)). If it is commutative, it is also known as an Abelian group.
Also by the way, it is common to notate \(a ∘ a ∘ a ∘ ... ∘ a\) \(n\) times as \(a^n\)
A good way to think about what groups actually are is as symmetries. This is because these four rules are exactly what you would expect rotations and reflections to do with the operation of doing one after the other.
For example, now the inverses rule makes sense because if you rotate clockwise then of course you should also be able to rotate counterclockwise.
An example of a group is the integers with the operation of addition, but not the integers with the operation of multiplication, because only \(1\) and \(-1\) have an inverse. The rational numbers under multiplication (excluding \(0\)) also form a group, and so do the rationals under addition. Also, of course, the real numbers under addition and under multiplication (excluding \(0\)).
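And a quick spot check of the four rules (my own, for the integers mod \(5\) under addition):

```python
import itertools

G = range(5)
op = lambda a, b: (a + b) % 5  # the binary operation

assert all(op(a, b) in G for a, b in itertools.product(G, G))  # closure
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in itertools.product(G, repeat=3))      # associativity
e = 0
assert all(op(a, e) == a == op(e, a) for a in G)               # identity
assert all(any(op(a, b) == e for b in G) for a in G)           # inverses
print("the integers mod 5 form a group under addition")
```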
proofs¶
Proof number one: there's only one identity element.
This proof uses a proof by contradiction strategy. Let's say that there is more than one identity element, and pick two of them, \(e_1\) and \(e_2\).
Now, I ask of you, what is \(e_1 ∘ e_2\)? Because on the one hand, it should equal \(e_2\) because \(e_1\) times anything is that thing. But on the other hand, it should equal \(e_1\) because anything times \(e_2\) is that thing.
Thus, because they are both equal to \(e_1 ∘ e_2\), they must themselves be equal. Thus there is only one identity element.
And you can keep going with this logic, doing the same thing with the next one, and the next one, until there is only one left.
QED!
Proof number two: The inverse of the inverse is the original.
Every element has an inverse. So, by definition, the inverse has an inverse.
Let's operate all of these together and see what happens.
This should of course equal \(a\) because \(a^{-1}\) times its inverse should cancel out. But also this should equal \((a^{-1})^{-1}\) because \(a\) and its inverse should cancel out. Thus, because they are both equal to the same thing, they themselves must be equal.
QED!
Proof number three: The inverse can cancel out from either side.
The term \(a^{-1} ∘ a\) can also be simplified. Because \(a\) is equal to \((a^{-1})^{-1}\), I can cancel \(a^{-1}\) with its inverse, resulting in the identity.
QED!
Proof number four: There's only one inverse for a given term.
This one uses the same general strategy as proof number one. Let's assume that there were multiple inverses, denoted \(a^{-1}_1\) and \(a^{-1}_2\). Then of course, \(a ∘ a^{-1}_1 = e\).
Let's see what happens when you multiply both sides on the left by \(a^{-1}_2\).
Then \(a^{-1}_2\) and \(a\) would cancel out resulting in \(a^{-1}_1\) on the left. But on the other side, the identity element cancels out resulting in \(a^{-1}_2\). Thus, because they are both equal to the same thing, they themselves must be equal.
\(2000\) Lines, wow.
QED!
Proof number five: \((a^2)^{-1} = (a^{-1})^2\) and they can both be denoted as \(a^{-2}\).
Because \((a^{-1})^2 ∘ a^2 = a^{-1} ∘ a^{-1} ∘ a ∘ a = e\) (the inner pair cancels first, then the outer pair), these are both the inverse of \(a^2\), and because of proof number four, they must both be the same.
\(2013\) Lines (I was born in \(2013\)).
QED!
Newtonian physics¶
I tried to just write this down on paper, but I thought it would be much more convenient to write it down over here.
I'm going to try to use Newtonian physics to describe collisions of spheres. Each sphere has an index \(n\), with a center \(c_n\), velocity \(v_n\), acceleration \(a_n\), and so on. If a property doesn't have a subscript, then it works for any index.
\(2025\) Lines.
the Fibonacci part of the eigen page, but better¶
Here's the obvious definition of the Fibonacci sequence:
Yes, I know, it normally starts at \(1\) and \(2\), but there's a reason why I'm starting it at \(0\) and \(1\). But we still have the following:
\(2102\) Lines.
I know you're probably thinking: Yeah, that's a pretty nice formula. But still, how do I raise this matrix to the power of \(n\)? Well, to answer this question, we need a little bit of eigenvectors and eigenvalues.
That's enough work for one day.
The eigenvectors of a matrix \(A\) are vectors where if you apply the matrix's operation, that just ends up scaling the vector by some constant \(\lambda\) (a.k.a. the eigenvalue). In written form, it looks like this:
Also, an \(n \times n\) matrix usually has \(n\) eigenvectors*.
Side note!
*What this means is that the scaled copies of a given eigenvector are also eigenvectors and have the same eigenvalue. For this reason, scaled copies of an eigenvector are usually considered as just one.
But, assuming that there are two eigenvectors, I can do something special: I can change the basis.
As you can see, this can be very useful for finding \(\begin{bmatrix} 1 & 1 \\ 1 & 0 \\ \end{bmatrix}^n\). We just need to figure out the eigenvectors, eigenvalues, and how to combine them.
That's enough work for now.
It's been \(3\) hours.
But, now the question becomes: how do I write \(\begin{bmatrix} 1 \\ 0 \\ \end{bmatrix}\) in terms of \(\begin{bmatrix} \varphi \\ 1 \\ \end{bmatrix}\) and \(\begin{bmatrix} \psi \\ 1 \\ \end{bmatrix}\)?
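One way to answer that (a quick check of mine): since \(\varphi - \psi = \sqrt{5}\), we have \(\frac{1}{\sqrt{5}} \left( \begin{bmatrix} \varphi \\ 1 \\ \end{bmatrix} - \begin{bmatrix} \psi \\ 1 \\ \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix}\), so the coefficients are \(\frac{1}{\sqrt{5}}\) and \(-\frac{1}{\sqrt{5}}\).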
\(2201\) Lines.
\(2222 + 2\) Lines.
an inductive set theoretic proof (unfinished)¶
This proof partially comes from Another Roof.
Also, before I start with how addition is set theoretically defined, I have the notation \(\text{S} (n)\), pronounced "the successor of \(n\)", which means \(n + 1\) and can easily be defined within set theory.
Here's an example:
Now, how do I inductively prove that the following is true?
Obviously I use induction on \(b\). First, base case, set \(b\) to zero.
So, of course, to figure this out, just use induction on \(a\)!
Base case:
This statement is true by definition.
Inductive reasoning:
set theory 3?¶
what is a set?¶
A set is a collection of things...
... Okay, fine, I'll be more specific. A set can contain a certain number of objects; for example, I could have the set containing this mouse that I'm holding and this chair that I'm sitting on.
So a set kind of means a collection or a group, but group theory is a whole other page.
Note: the elements (that is, the things that are part of the set, a.k.a. the members of it) are not ordered within a set, and a set having the same element twice doesn't change the set. When you ask the question "is this thing part of your set?", I can only answer with a yes or a no.
In other words, sets are only defined by their elements.
\(2300\) Lines.
As for some notation, the set of all even numbers would be written as \(\{ 2, 4, 6, 8 ... \}\), \(\{\) numbers \(n: n\) is even \(\}\), or \(\{ x: x\) is a number and is even \(\}\). (Pronounced "the set containing \(2\), \(4\), \(6\), \(8\) and so on", "the set of all numbers \(n\) where \(n\) is even", and "the set of all \(x\) where \(x\) is a number, and furthermore, \(x\) is even" respectively.)
Also, the symbol for "\(x\) is a member of \(S\)" where \(S\) is a set is \(x \in S\).
But sets can contain more than just things, they can contain other sets.
But this leads to a paradox: would \(\{S:\) not \(S \in S \}\) (the set containing only the sets that don't contain themselves) contain itself? (P.S. there's a great video on this.)
Clearly there's a problem with this. Let's take a step back.
what is a set, really?¶
Idk. I'd rather talk about...
ordinals¶
Wait a minute! I can just do what I've already done, but in the exponents.
\(2400\) Lines.