How did mathematicians decide on the axioms of linear algebra?

Answers (2)


manpreet · Best Answer · 2 years ago

So vector spaces, linear transformations, inner products, etc. all have their own axioms that they must satisfy in order to be considered what they are.

But how did we come to decide to include those axioms and not include others?

For example, why does this rule hold in inner product spaces: $c\langle u,v\rangle = \langle cu,v\rangle$, when my intuition says that it should be $\langle cu,cv\rangle$?
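(As a concrete check of that intuition, take the ordinary dot product on $\mathbb{R}^2$, which is one standard inner product. Scaling both arguments gives

$$\langle cu, cv \rangle = (cu_1)(cv_1) + (cu_2)(cv_2) = c^2 \langle u, v \rangle,$$

so pulling $c$ out of both slots produces a factor of $c^2$, not $c$.)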

And how did we decide that preserving scalar multiplication and addition were sufficient criteria for something to be a linear map?

manpreet · 2 years ago

 

Linear algebra is one of the first "abstractions" that you encounter in mathematics that is not very well motivated by experience. (Well... there are numerals, which are a pretty tricky abstraction as well, but most of us don't recall learning those.)

It helps to have the backstory.

Mathematicians studied geometry and simple transformations of the plane like "rotation" and "translation" (moving everything to the right by 3 inches, or up by 7 inches, or northeast by 3.2 inches, etc.) as far back as Euclid, and at some point, they noticed that you could do things like "do one translation after another", and the result was the same as if you'd done some different translation. And even if you did the first two translations in a different order, the resulting translation was still the same. So pretty soon they said "Hey, these translations are behaving a little like numbers do when we add them together: the order we add them in doesn't matter, and there's even something that behaves the way zero does: the "don't move at all" transformation, when composed with any other translation, gives that other translation."
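In symbols (just a sketch, writing $T_a$ for the translation that shifts every point of the plane by the vector $a$):

$$T_a(x) = x + a, \qquad (T_a \circ T_b)(x) = x + (b + a), \qquad T_a \circ T_b = T_b \circ T_a, \qquad T_0 = \text{the identity}.$$

Composing two translations just adds their displacement vectors, which is why composition inherits the commutativity of "+".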

So you have two different sets of things: ordinary numbers, and "translations of the plane". For both, there's a way of combining ("+" for numbers, "composition of transformations" for translations), each of these combining rules has an identity element ("0" for addition, "don't move at all" for translation), and for both operations the order of operations doesn't matter. And you start to realize something: if I proved something about numbers using only the notion of addition, the fact that there's an identity, and that addition is commutative, I could just replace a bunch of words and I'd have a proof about the set of all translations of the plane!

And the next thing you know, you're starting to realize that other things have these kinds of shared properties as well, so you say "I'm going to give a name to sets of things like that: I'll call them 'groups'." (Later, you realize that the commutativity of addition is kind of special, and you really want to talk about other operations as well, so you enlarge your notion of "group" and instead call these things "Abelian groups," after Abel, the guy who did a lot of the early work on them.)

The same thing happened with linear algebra. There are some sets of things that have certain properties, and someone noticed that they ALL had the same properties, and said "let's name that kind of collection". It wasn't a pretty development -- the early history of vectors was complicated by people wanting to have a way to multiply vectors in analogy with multiplying real numbers or complex numbers, and it took a long time for folks to realize that having a "multiplication" was nice, but not essential, and that even for collections that didn't have multiplication, there were still a ton of important results.

In a way, though, the most interesting thing was not the sets themselves -- the "vector spaces", but rather, the class of transformations that preserve the properties of a vector space. These are called "linear transformations", and they are a generalization of the transformations you learn about in Euclid.
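Concretely, "preserving the properties of a vector space" comes down to respecting its two operations: a map $T$ between vector spaces is linear exactly when

$$T(u + v) = T(u) + T(v) \qquad \text{and} \qquad T(cu) = c\,T(u)$$

for all vectors $u, v$ and scalars $c$ -- which is one way to see why those two criteria are the defining ones in the question above.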

Why are these so important? One reason for their historical importance is that for a function from $n$-space to $k$-space, the derivative, evaluated at some point of $n$-space, is a linear transformation. In short: something we cared a lot about -- derivatives -- turns out to be very closely tied to linear transformations.
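To make that concrete: for a differentiable $f:\mathbb{R}^n \to \mathbb{R}^k$, the derivative at a point $a$ can be represented by the $k \times n$ Jacobian matrix $J_f(a)$, and it acts on a small displacement $h$ as a linear map:

$$f(a + h) \approx f(a) + J_f(a)\,h.$$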

For a function $f:\mathbb{R}\to\mathbb{R}$, the derivative $f'(a)$ is usually regarded as "just a number". But consider for a moment

$$f(x) = \sqrt{x}, \qquad f'(x) = \frac{1}{2\sqrt{x}}, \qquad f(100) = 10, \qquad f'(100) = \frac{1}{20}.$$
Suppose you wanted to compute the square root of a number that's a little way from 100, say, 102. We could say "we moved 2 units in the domain; how far do we have to move away from 10 (i.e., in the codomain)?" The answer is that the square root of $102$ is (very close to) the square root of $100$, displaced by $2 \cdot f'(100) = 2 \cdot \frac{1}{20} = \frac{1}{10}$.
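Written out with the general linear-approximation formula $f(a+h) \approx f(a) + f'(a)\,h$:

$$\sqrt{102} \approx f(100) + f'(100) \cdot 2 = 10 + \frac{2}{20} = 10.1,$$

and indeed the true value is $\sqrt{102} = 10.0995\ldots$, so the estimate coming from the derivative is off by less than $0.0005$.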
