Equivalent conditions for a linear map between two normed spaces to be continuous everywhere/2 implies 3
<noinclude>
==Statement==
Given two [[normed space|normed spaces]], {{M|(X,\Vert\cdot\Vert_X)}} and {{M|(Y,\Vert\cdot\Vert_Y)}}, and a [[linear map]] {{M|L:X\rightarrow Y}}, we have:
* If {{M|L}} is continuous at a point (say {{M|p\in X}}) '''then'''
* {{M|L}} is a [[bounded linear map]], that is to say:
** {{M|\exists A\ge 0\ \forall x\in X[\Vert L(x)\Vert_Y \le A\Vert x\Vert_X]}}
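''Remark: in particular such an {{M|A}} shows that {{M|L}} maps the closed unit ball of {{M|X}} into the closed ball of radius {{M|A}} in {{M|Y}}, since for any {{M|x\in X}} with {{M|\Vert x\Vert_X\le 1}}:''
{{MM|1=\Vert L(x)\Vert_Y\le A\Vert x\Vert_X\le A}}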
==Proof==
</noinclude>
The key to this proof is exploiting the linearity of {{M|L}}, as will be explained in the blue box.
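Concretely, the consequences of linearity (combined with absolute homogeneity of the norm) that get used below are:
{{MM|1=L(x-p)=L(x)-L(p)\qquad\text{ and }\qquad\Vert L(\lambda x)\Vert_Y=\vert\lambda\vert\Vert L(x)\Vert_Y\text{ for }x,p\in X\text{ and scalars }\lambda}}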
* Suppose that {{M|L:X\rightarrow Y}} is continuous at {{M|p\in X}}. Then:
** {{M|1=\forall\epsilon>0\ \exists\delta>0[\Vert x-p\Vert_X<\delta\implies\Vert L(x-p)\Vert_Y<\epsilon]}} (note that {{M|1=L(x-p)=L(x)-L(p)}} due to [[linear map|linearity]] of {{M|L}})
* Define {{M|1=u:=x-p}}, then:
** {{M|1=\forall\epsilon>0\ \exists\delta>0[\Vert u\Vert_X<\delta\implies\Vert Lu\Vert_Y<\epsilon]}} (writing {{M|1=Lu:=L(u)}} as is common for [[linear map|linear maps]])
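To make this substitution explicit: for any {{M|x\in X}} with {{M|\Vert x-p\Vert_X<\delta}}, linearity gives
{{MM|1=\Vert L(x)-L(p)\Vert_Y=\Vert L(x-p)\Vert_Y=\Vert Lu\Vert_Y<\epsilon}}
and as {{M|x}} ranges over {{M|X}} the vector {{M|1=u:=x-p}} ranges over all of {{M|X}}, so the condition really does hold for ''every'' {{M|u\in X}} with {{M|\Vert u\Vert_X<\delta}}.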
{{Begin Blue Notebox}}
We now know we may assume that {{M|1=\forall\epsilon>0\ \exists\delta>0[\Vert u\Vert_X<\delta\implies\Vert Lu\Vert_Y<\epsilon]}} - see this box for a guide on how to use this information and how I constructed the rest of the proof. The remainder of the proof follows this box.
{{Begin Blue Notebox Content}}
The key to this proof is in the norm structure and the linearity of {{M|L}}.
* Pick (fix) some {{M|\epsilon>0}}; we now know:
** {{M|\exists\delta>0}} such that if we have {{M|\Vert x\Vert_X<\delta}} then ''we have'' {{M|\Vert Lx\Vert_Y<\epsilon}}
In the proof we will, at some point, have to show that {{M|\forall x\in X[\Vert Lx\Vert_Y\le A\Vert x\Vert_X]}}; this means:
* At some point we'll be given an arbitrary {{M|x\in X}}
However we have a {{M|\delta}} such that {{M|\Vert u\Vert_X<\delta\implies\Vert Lu\Vert_Y<\epsilon}}
* '''All we have to do is scale {{M|x}} so that the scaled vector has norm less than {{M|\delta}}''' - we can do this using scalar multiplication.
* ''Note that if {{M|1=x=0}} the result is trivial, so assume the arbitrary {{M|x}} is {{M|\ne 0}}''
* With this in mind the task is clear: we need to multiply {{M|x}} by something such that the vector part has {{M|\Vert\cdot\Vert_X}}-norm less than {{M|\delta}}
** Then we can say (supposing {{M|1=x=\alpha p}} for some positive {{M|\alpha}} and {{M|\Vert p\Vert_X<\delta}}) that {{M|1=\Vert L(\alpha p)\Vert_Y=\alpha\Vert Lp\Vert_Y}}
*** But {{M|\Vert p\Vert_X<\delta\implies\Vert Lp\Vert_Y<\epsilon}}
** So {{M|1=\Vert L(\alpha p)\Vert_Y=\alpha\Vert Lp\Vert_Y<\alpha\epsilon}}; if we can get a {{M|\Vert x\Vert_X}} involved in {{M|\alpha}}, the result will follow.
'''Getting (a scaled copy of) an arbitrary {{M|x}} to have norm {{M|<\delta}}'''
# Let's [[normalise (vector)|normalise]] {{M|x}} and then multiply it by its norm again (so as to do nothing overall)
#* So notice {{MM|1=x=\overbrace{\Vert x\Vert_X}^\text{scalar part}\cdot\overbrace{\frac{1}{\Vert x\Vert_X}x}^\text{vector part} }},
# Now that the vector part has norm {{M|1}}, we can get it strictly within {{M|\delta}} by multiplying it by {{MM|1=\frac{\delta}{2} }} (and compensating in the scalar part)
#* So {{MM|1=x=\overbrace{\Vert x\Vert_X\cdot\frac{2}{\delta} }^\text{scalar part}\cdot\overbrace{\frac{\delta}{2\Vert x\Vert_X}x}^\text{vector part} }}.
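To check that the vector part really does now have norm less than {{M|\delta}}: since {{M|x\ne 0}} the scalar {{M|1=\frac{\delta}{2\Vert x\Vert_X} }} is positive, so by absolute homogeneity of the norm
{{MM|1=\left\Vert\frac{\delta}{2\Vert x\Vert_X}x\right\Vert_X=\frac{\delta}{2\Vert x\Vert_X}\Vert x\Vert_X=\frac{\delta}{2}<\delta}}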
So we have {{MM|1=\left\Vert\frac{\delta}{2\Vert x\Vert_X}x\right\Vert_X<\delta}} which {{MM|1=\implies \left\Vert L\left(\frac{\delta}{2\Vert x\Vert_X}x\right)\right\Vert_Y<\epsilon}}
* Thus {{MM|1=\Vert Lx\Vert_Y=\frac{2\Vert x\Vert_X}{\delta}\cdot\left\Vert L\left(\frac{\delta}{2\Vert x\Vert_X}x\right)\right\Vert_Y<\frac{2\Vert x\Vert_X}{\delta}\cdot\epsilon=\frac{2\epsilon}{\delta}\Vert x\Vert_X}}
''But {{M|\epsilon}} was fixed, and so was the {{M|\delta}} we know to exist for it, so after fixing them set {{M|1=A=\frac{2\epsilon}{\delta} }} and the result will follow!''
{{End Blue Notebox Content}}{{End Blue Notebox}}
* Fix some arbitrary {{M|\epsilon>0}} (it doesn't matter what)
** We know that {{M|\exists\delta>0[\Vert u\Vert_X<\delta\implies\Vert Lu\Vert_Y<\epsilon]}} - take such a {{M|\delta}} (which we know to exist by hypothesis) and fix it also.
* Define {{M|1=A:=\frac{2\epsilon}{\delta} }} (if you are unsure of where this came from, see the blue box)
* Let {{M|x\in X}} be given (this is the {{M|\forall x\in X}} part of our proof; we have just claimed an {{M|A}} exists on the line above)
** If {{M|1=x=0}} then
*** Trivially the result is true: {{M|1=L(0_X)=0_Y}} and {{M|1=\Vert 0_Y\Vert_Y=0}} by definition, while {{M|1=A\Vert x\Vert_X=0}} as {{M|1=\Vert 0_X\Vert_X=0}}, so we have {{M|0\le 0}}, which is true. (This is more working than the line is worth)
** Otherwise ({{M|x\ne 0}})
*** Notice that {{MM|1=x=\frac{2\Vert x\Vert_X}{\delta}\cdot\frac{\delta}{2\Vert x\Vert_X}x}} and {{MM|\left\Vert \frac{\delta}{2\Vert x\Vert_X}x\right\Vert_X<\delta}}
**** and {{MM|1=\left\Vert \frac{\delta}{2\Vert x\Vert_X}x\right\Vert_X<\delta\implies\left\Vert L\left(\frac{\delta}{2\Vert x\Vert_X}x\right)\right\Vert_Y<\epsilon}}
** Thus we see {{MM|1=\Vert Lx\Vert_Y=\left\Vert L\left(\frac{2\Vert x\Vert_X}{\delta}\cdot\frac{\delta}{2\Vert x\Vert_X}x\right)\right\Vert_Y=\frac{2\Vert x\Vert_X}{\delta}\cdot\overbrace{\left\Vert L\left(\frac{\delta}{2\Vert x\Vert_X}x\right)\right\Vert_Y}^{\text{remember this is }<\epsilon} }} {{MM|1=<\frac{2\Vert x\Vert_X}{\delta}\cdot\epsilon=\frac{2\epsilon}{\delta}\Vert x\Vert_X=A\Vert x\Vert_X}}
*** Shortening the workings, this states that {{MM|1=\Vert Lx\Vert_Y< A\Vert x\Vert_X}}
* So if {{M|1=x=0}} we have equality, otherwise {{MM|1=\Vert Lx\Vert_Y< A\Vert x\Vert_X}}
In either case, it is true that {{MM|1=\Vert Lx\Vert_Y\le A\Vert x\Vert_X}}
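Since {{M|x\in X}} was arbitrary, and {{M|1=A:=\frac{2\epsilon}{\delta}\ge 0}} was fixed before {{M|x}} was chosen, this is exactly the claimed statement:
{{MM|1=\exists A\ge 0\ \forall x\in X[\Vert L(x)\Vert_Y\le A\Vert x\Vert_X]}}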
<br/>
''This completes the proof''
<noinclude>
{{Theorem Of|Linear Algebra|Functional Analysis}}
</noinclude>