**Lemma:** The ring ℤ[i] of Gaussian integers is a UFD.

**Proof.** Define N(a + bi) = a² + b². The identity

(a² + b²)(c² + d²) = (ac − bd)² + (ad + bc)²

proves that

N(zw) = N(z)N(w) for all z, w.

So N is a multiplicative function. We claim that N is a Euclidean norm on ℤ[i]. Let z, w ∈ ℤ[i] with w ≠ 0. We need to show that there exist elements q, r ∈ ℤ[i] such that z = qw + r where N(r) < N(w). Now z/w ∈ ℚ(i), so we can write z/w = u + vi where u, v ∈ ℚ. Let m and n be the closest integers to u and v, respectively. Let q = m + ni. Since |u − m| ≤ 1/2 and |v − n| ≤ 1/2, it follows that

N(z/w − q) = (u − m)² + (v − n)² ≤ 1/4 + 1/4 = 1/2.

Set r = z − qw. Then r ∈ ℤ[i] and

N(r) = N(w) · N(z/w − q) ≤ N(w)/2 < N(w),

as desired. So ℤ[i] is a Euclidean domain; since every Euclidean domain is a PID and every PID is a UFD, the lemma follows. ∎
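The division step in the proof can be watched numerically. Below is a small sketch (the helper names are mine, and Python's built-in complex numbers stand in for ℚ(i)):

```python
# Division with remainder in Z[i]: round z/w to the nearest Gaussian integer.
# Illustration of the proof's construction, not a full-blown implementation.

def gauss_divmod(z, w):
    """Return q, r in Z[i] with z = q*w + r and N(r) < N(w)."""
    exact = z / w                                      # element of Q(i)
    q = complex(round(exact.real), round(exact.imag))  # nearest m + ni
    r = z - q * w
    return q, r

def norm(z):
    """The norm N(a + bi) = a^2 + b^2."""
    return z.real ** 2 + z.imag ** 2

z, w = complex(27, 23), complex(8, 1)
q, r = gauss_divmod(z, w)
print(q, r, norm(r) < norm(w))
```

Rounding each coordinate to the nearest integer is exactly what guarantees N(r) ≤ N(w)/2 in the proof.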

Let’s turn our attention to solving the equation y² + 1 = x³ in integers x and y. Suppose x, y ∈ ℤ satisfy y² + 1 = x³. First, x cannot be even, because then y² = x³ − 1 ≡ 3 (mod 4), which is impossible (squares are ≡ 0 or 1 mod 4). So x must be odd, and as a result y is even. Now we have the factorization (y + i)(y − i) = x³ in ℤ[i]. We first show that y + i and y − i are relatively prime. Suppose δ divides both y + i and y − i. Then δ divides their difference 2i = (1 + i)². If δ is not a unit, then 1 + i would divide δ. This is because 1 + i is prime: indeed, if 1 + i = αβ, then from 2 = N(1 + i) = N(α)N(β), we see that N(α) = 1 or N(β) = 1; so either α or β is a unit; i.e. 1 + i is irreducible and hence prime because we are in a UFD. But if 1 + i divides δ, then 1 + i divides y + i and so N(1 + i) = 2 divides N(y + i) = y² + 1. But this means y² + 1 is even, i.e. y is odd. But then y² + 1 ≡ 2 (mod 4), so x would be even, contradiction. We conclude that δ is a unit.

So y + i and y − i are relatively prime. So if π is any prime that divides y + i, then π divides x, so π³ divides x³ = (y + i)(y − i). So π³ divides either y + i or y − i; namely the first, since π does not divide y − i. Hence, comparing prime factorizations, y + i must be of the form uβ³ for some β ∈ ℤ[i], where u is a unit. Since the only units in ℤ[i] are ±1, ±i (units correspond to norm 1), and these are already cubes (for instance, i = (−i)³), we can absorb them into the β³ term. So write y + i = (a + bi)³ for some a, b ∈ ℤ. Then we get

y + i = (a³ − 3ab²) + (3a²b − b³)i.

As a result, y = a³ − 3ab² and 1 = 3a²b − b³ = b(3a² − b²). We deduce that b = ±1. This gives ±(3a² − 1) = 1. Clearly b = 1 doesn’t give any solution in integers; so b = −1 and consequently, a = 0. So y = a³ − 3ab² = 0. Now y² + 1 = x³ gives x = 1. Therefore, x = 1, y = 0 is the only solution in integers.
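As a sanity check (not part of the proof), a brute-force search over a modest range turns up only the solution found above:

```python
# Search for integer solutions of y^2 + 1 = x^3 in a small box.
solutions = [(x, y) for x in range(-50, 51) for y in range(-200, 201)
             if y * y + 1 == x ** 3]
print(solutions)  # only (1, 0) appears
```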


Let’s recall that: A space X is compact if every open cover of X has a finite subcover. And X is sequentially compact if every sequence in X has a convergent subsequence.

When X is a metric space, these two conditions are equivalent (which is already interesting!). But I want to talk about a really cool example of a topological space which is compact but not sequentially compact.

Let X = {0, 1}^P(ℕ), where P(ℕ) is the power set of ℕ (i.e. P(ℕ) is the set of all subsets of the positive integers). One can think of the space X in two ways: The formal way is to think of X as the set of all functions P(ℕ) → {0, 1}. Or, we can simply say X is the space of all infinite tuples, all of whose entries are coming from {0, 1}. Of course, the infinite tuple will have uncountably many coordinates (namely the cardinality of P(ℕ)).

In any case, the space X is compact by Tychonoff’s theorem, being a product of the compact spaces {0, 1}. However, let’s show that X is not sequentially compact. For this, we need to construct some sequence (x_n) which does not have a convergent subsequence. The idea comes from the alternating sequence 0, 1, 0, 1, …, which does not converge; we will plant it along every candidate subsequence. In order to construct the sequence (x_n) in X, we need to specify P(ℕ)-many coordinates for each x_n. Essentially we want to fill out a grid of dimension ℕ × P(ℕ):

The idea is to fill out the grid column by column! (It would seem more natural to fill the table row by row, since filling out the n-th row exactly corresponds to specifying an element x_n of the sequence). Recall that the columns of the grid are indexed by elements of P(ℕ). For every element A of P(ℕ), fill the A-th column by inserting 0, 1, 0, 1, … into the rows indexed by a₁, a₂, a₃, … respectively, where A = {a₁ < a₂ < a₃ < ⋯}, and entering 0 (or anything arbitrary) to all other entries of this column. Let’s give an example. For example, if A = {2, 3, 7, 10}, then we would insert 0 to the entries in rows 2 and 7, insert 1 to the entries in rows 3 and 10, and insert 0 in row n for all n ∉ A. Of course, A could be infinite as well, but hopefully the idea is clear.

Now that the grid is filled, the elements x_n of the sequence are ready. They are simply the rows of the grid. The claim is that the constructed sequence (x_n) has no convergent subsequence. Indeed, assume to the contrary that (x_n) has some subsequence (x_{n_k}) that converges. We take A = {n₁ < n₂ < n₃ < ⋯} and look at the A-th column of the grid. What do we see? We see 0, 1, 0, 1, … across the rows indexed by n₁, n₂, n₃, …. So this shows (x_{n_k}) cannot possibly converge, because if it did, it would have to converge componentwise, but the A-th component of x_{n_k} alternates between 0 and 1. This is a contradiction, and the claim is proved.

So X = {0, 1}^P(ℕ) is an example of a compact space which is not sequentially compact.
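The heart of the argument can be simulated for one finite set of subsequence indices; the snippet below (with my own hypothetical index set) just re-creates a single column of the grid:

```python
# For any subsequence indices n_1 < n_2 < ..., the column indexed by the set
# A = {n_1, n_2, ...} reads 0, 1, 0, 1, ... along that very subsequence.

def column_entry(n, A):
    """Grid entry in row n, column A (A given as a sorted tuple of indices)."""
    if n in A:
        return A.index(n) % 2  # 0, 1, 0, 1, ... along the rows listed in A
    return 0                   # arbitrary filler for rows outside A

subseq = (2, 3, 7, 10)         # a sample choice of subsequence indices
A = subseq                     # inspect precisely the column indexed by them
values = [column_entry(n, A) for n in subseq]
print(values)  # [0, 1, 0, 1] -- coordinate A alternates along the subsequence
```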

**Remark.** Of course, the same idea goes on to show that {0, 1}^ℝ is compact but not sequentially compact. The key is to write down a bijection between P(ℕ) and ℝ (an injection P(ℕ) → ℝ would suffice) and work from there…


Chapter 4 of the book is titled “Entertainment”. It includes applications of tools such as group actions and the Sylow Theorems to prove a variety of results for both finite and infinite groups. For the finite case, the focus is partly on teaching methods to prove a statement of the form “Every group of order n is not simple.” These are fun problems in my experience. I plan to do a blog post in the future outlining some general tricks concerning this topic. (The algebra qualifying exams tend to ask questions of this type).

For now, let us consider the groups of order 63 = 3² · 7. Can we prove that every group of order 63 is not simple? Yes:

**Easy way:** Note that the number of 7-Sylow subgroups is a divisor of 9, and is ≡ 1 (mod 7). Hence, there is only one 7-Sylow subgroup, which must therefore be normal.

In their book, Smith and Tabachnikova use a different proof, which I will call the “hard way”. Of course, the authors are fully aware of the simpler proof above, and in fact they hint in a parenthetical remark that “but the alert reader will spot a faster proof!” But I quite like the approach they take. It seems like a neat trick to try on other problems.

**Hard (but still elegant) way:** Let G be a group of order 63. Let’s consider the 3-Sylow subgroups of G, which have order 9. By the Sylow Theorems, the number of 3-Sylow subgroups divides 7 and is ≡ 1 (mod 3), so it is either 1 or 7. If it is one, then the 3-Sylow subgroup is normal and G is not simple, as claimed. So let’s assume that there are seven 3-Sylow subgroups. Take two distinct 3-Sylow subgroups P and Q. Let H = P ∩ Q. Clearly H is a subgroup of P, and so by Lagrange’s Theorem |H| = 1 or 3 (|H| = 9 would force P = Q). But if |H| = 1, then the subset PQ = {pq : p ∈ P, q ∈ Q} would contain |P| · |Q| / |P ∩ Q| = 81 distinct elements, which is absurd since the group has only 63 elements in total! Thus, |H| = 3. Now we use the fact that in a p-group, any proper subgroup is strictly contained in its normalizer (it is a nice exercise; the proof by induction on the order of the group works smoothly. Alternatively, we have the more general fact that in any finite group G, if H is a p-group inside G, then [N_G(H) : H] ≡ [G : H] (mod p), and this is proved by considering the fixed points for the left-multiplication action of H on the set of left cosets of H). It follows that H ⊊ N_P(H) and H ⊊ N_Q(H). Therefore, N_P(H) = P and N_Q(H) = Q, so both P and Q lie inside N_G(H). Since P is maximal in G (no divisor of 63 lies strictly between 9 and 63), and N_G(H) properly contains P (it also contains Q ≠ P), it follows that N_G(H) = G and the subgroup H is normal in G.
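The Sylow arithmetic above is easy to double-check mechanically; here is a small script (the function name is mine):

```python
# Candidate Sylow counts for |G| = 63 = 3^2 * 7, straight from the congruence
# and divisibility conditions in the Sylow Theorems.

def sylow_counts(order, p):
    """Divisors of the p'-part of the group order that are = 1 mod p."""
    m = order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

print(sylow_counts(63, 7))  # [1]    -> the 7-Sylow subgroup is always normal
print(sylow_counts(63, 3))  # [1, 7] -> the hard case really has seven 3-Sylows
# If two distinct 3-Sylows intersected trivially: |PQ| = 9 * 9 / 1 = 81 > 63.
assert 9 * 9 // 1 > 63
```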


It is enough to show that the closed interval [0, 1] is uncountable. Note that [0, 1] has the following three properties: it is compact, it is Hausdorff, and it has no isolated points.

(Definition: If X is a topological space, then x ∈ X is called an *isolated point* of X if the singleton {x} is an open set in X). And now the general theorem follows:

**Theorem.** If X is a non-empty compact Hausdorff topological space having no isolated points, then X is uncountable.

**Proof.** (Following Munkres). The proof proceeds in two steps. The first step will use the Hausdorff condition and the non-existence of isolated points. The second step will use compactness (via the equivalent formulation stating that if a family of closed sets in X satisfies the finite intersection property, then the intersection of the whole family is non-empty).

**Step 1:** We show that if x ∈ X and U is any non-empty open set in X, then there exists a non-empty open subset V ⊆ U such that x ∉ cl V (the closure of V). First we choose y ∈ U such that y ≠ x (this is possible: if x ∉ U, then let y be any element of U; if x ∈ U, then because {x} is not open but U is open, we have U ≠ {x}, hence we can pick y ∈ U different from x). Next, using the Hausdorff condition, we can find disjoint neighbourhoods W_x and W_y of x and y, respectively. It follows that V = W_y ∩ U satisfies x ∉ cl V (Why? If x ∈ cl V, then every neighbourhood of x would intersect V. But W_x is a neighbourhood of x, and W_x ∩ W_y = ∅, and so W_x ∩ V = ∅, contradiction).

**Step 2.** We will show that any function f : ℤ₊ → X is not surjective. This will establish uncountability of X. Let V₀ = X. Applying Step 1 to x = f(1) and U = V₀, we get a non-empty open subset V₁ ⊆ V₀ such that f(1) ∉ cl V₁. We inductively define V_n as follows. If V_{n−1} is already determined, we apply Step 1 to x = f(n) and U = V_{n−1} to get a non-empty open subset V_n ⊆ V_{n−1} such that f(n) ∉ cl V_n. So we have a nested sequence of non-empty closed sets:

cl V₁ ⊇ cl V₂ ⊇ cl V₃ ⊇ ⋯

Since X is compact, the intersection ⋂_{n ≥ 1} cl V_n is not empty. If x ∈ ⋂_{n ≥ 1} cl V_n, then x ≠ f(n) for each n (otherwise, f(n) = x ∈ cl V_n, which is a contradiction). Thus, f is not surjective. ∎
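For X = [0, 1] the two steps can be made completely concrete: at each stage, shrink to a closed subinterval avoiding one point of the sequence. The sketch below (my own naming, using exact rational arithmetic) produces a point missed by the whole listed sequence:

```python
from fractions import Fraction

def avoid(p, a, b):
    """Return a closed subinterval of [a, b] of positive length missing p."""
    third = (b - a) / 3
    left, right = (a, a + third), (b - third, b)
    # The two outer thirds are disjoint, so p misses at least one of them.
    return right if left[0] <= p <= left[1] else left

f = [Fraction(k, k + 1) for k in range(20)]  # a sample sequence in [0, 1]
a, b = Fraction(0), Fraction(1)
for p in f:
    a, b = avoid(p, a, b)                    # stage n avoids f(n)
# every point of the final interval [a, b] differs from every listed f(n)
print(all(not (a <= p <= b) for p in f))  # True
```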


However, one can prove the following version of Goldbach’s Conjecture for the ring ℤ/nℤ:

**Proposition.** Let n be a positive integer. Given any even integer 2m, there exist primes p and q such that p + q ≡ 2m (mod n).

The proof will, in fact, show that infinitely many such primes p and q exist. The required ingredients are Dirichlet’s Theorem on Arithmetic Progressions, and the following lemma due to Schinzel (1958):

**Lemma.** Given positive integers n and m, there exists an integer c such that gcd(c(2m − c), n) = 1.

**Proof of Lemma.** The case n = 1 is trivial, so assume n > 1 and write its prime factorization. We have n = p₁^{a₁} ⋯ p_k^{a_k}, where p₁, …, p_k are distinct primes and a₁, …, a_k are some positive integers. For each i, we will show that there exists an integer c_i such that p_i ∤ c_i(2m − c_i). If p_i = 2, we can let c_i = 1 (both 1 and 2m − 1 are odd). If p_i is an odd prime, p_i cannot divide both 2m − 1 and 2m + 1 (otherwise, it would divide their difference, which is 2); so if p_i ∤ 2m − 1, let c_i = 1, and if p_i ∤ 2m + 1, let c_i = −1. Now, by the Chinese Remainder Theorem, one can find an integer c such that c ≡ c_i (mod p_i) for each i. It is evident that gcd(c(2m − c), n) = 1. (If it is not evident, assume gcd(c(2m − c), n) > 1. Then some prime divisor p_i of n would divide c(2m − c), hence also c_i(2m − c_i). But it doesn’t.)

**Proof of Proposition.** From the lemma, we have gcd(c(2m − c), n) = 1 for some integer c; in particular, gcd(c, n) = 1 and gcd(2m − c, n) = 1. By Dirichlet’s Theorem, there exist primes p and q of the form p ≡ c (mod n) and q ≡ 2m − c (mod n). Thus, p + q ≡ c + (2m − c) = 2m (mod n), which proves the Proposition, as desired. Indeed, Dirichlet’s Theorem guarantees the existence of infinitely many such primes p and q.
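The Proposition is easy to test numerically for small moduli; this brute-force check (the function names are mine) simply searches for a suitable pair of primes:

```python
# For each modulus n and even target 2m, look for primes p, q with
# p + q = 2m (mod n).  Illustration only; the real proof uses Dirichlet.

def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

primes = [k for k in range(2, 500) if is_prime(k)]

def goldbach_mod(n, two_m):
    """Return primes (p, q) with p + q congruent to two_m mod n, if found."""
    for p in primes:
        for q in primes:
            if (p + q - two_m) % n == 0:
                return p, q
    return None

for n in range(1, 30):
    for two_m in range(0, 20, 2):
        assert goldbach_mod(n, two_m) is not None
print("verified for all n < 30 and 0 <= 2m < 20")
```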


Let T : V → V be a linear transformation on a finite-dimensional vector space V (over a field F). In other words, T satisfies T(x + y) = T(x) + T(y), and T(cx) = cT(x) for every scalar c. It is clear that the k-fold composition T^k is also a linear map. So we can ask whether T satisfies a polynomial identity. In other words, we are interested in finding some coefficients c₀, c₁, …, c_m, not all zero, such that c_m T^m + ⋯ + c₁ T + c₀ I = 0, where I is the identity map, and 0 is the zero map (the map that is identically zero). Since the set of all linear transformations on V forms an n²-dimensional vector space, where n = dim V, it follows that the n² + 1 maps I, T, T², …, T^{n²} are linearly dependent, so there exists a linear relation between the powers of T. As a result, we see that T satisfies a polynomial of degree at most n².

The Cayley–Hamilton Theorem shows that we can do significantly better. Recall the characteristic polynomial p(x) = det(xI − A), where A is the matrix of T. It is clear that p is a polynomial of degree n. The Cayley–Hamilton Theorem states that p(A) = 0. I am not going to prove this theorem here. A proof can be found on Wikipedia. I am also going to assume that the reader is familiar with the fact that a linear transformation on a finite-dimensional vector space can be identified with a matrix.

Let me explain why I like this theorem. It is because the theorem is true when the field is replaced by any commutative ring! In other words, if you consider an n × n matrix A with entries from some commutative ring R, then define p(x) = det(xI − A) (say, using Laplace expansion) as usual. We still have the conclusion p(A) = 0. Of course, it doesn’t make sense to talk about vector spaces over arbitrary rings (for this purpose, we use related objects called *modules*), but the Cayley–Hamilton Theorem still holds in the sense I described. This is closely related with the so-called “determinant trick” (cf. Nakayama’s Lemma) which is a very convenient tool in commutative algebra.

Here is something cool I learned today (from Professor Lior Silberman). We can deduce the Cayley–Hamilton Theorem for commutative rings **using** the result of the Cayley–Hamilton Theorem for fields. This could at first seem like a hopeless task, because we are trying to prove something stronger. Here is how the proof goes. Cayley–Hamilton is true for the field of rational numbers ℚ. Since ℤ is a subring of ℚ, Cayley–Hamilton holds for ℤ. By this, I just mean that the conclusion p(A) = 0 holds whenever A is a matrix with entries from ℤ. Now consider an arbitrary commutative ring R. Let A = (a_{ij}) be an n × n matrix with entries from R. We view the entries as an n × n array of indeterminates x_{ij}. Then, for the generic matrix X = (x_{ij}), the matrix p_X(X) has polynomial entries (each of them being multivariate in the n² variables x_{ij} with coefficients in ℤ). Now, when these indeterminates are replaced by integers, we get that p_X(X) vanishes (precisely because Cayley–Hamilton is true for ℤ). Thus, every entry of p_X(X) vanishes on all of ℤ^{n²}. But each entry of p_X(X) is a polynomial with coefficients in ℤ, and no non-zero multivariate polynomial with coefficients in ℤ can vanish on all of ℤ^{n²}. It follows that each entry of p_X(X) is the zero polynomial. Consequently, p_A(A) = 0: the substitution x_{ij} ↦ a_{ij} is a ring homomorphism ℤ[x₁₁, …, x_{nn}] → R, and it sends the entries of p_X(X) to the entries of p_A(A).
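The ℤ case that the argument bootstraps from can be verified mechanically for any particular integer matrix. Below is a self-contained check with exact integer arithmetic (a sketch; the helper names are mine):

```python
# Build p(x) = det(xI - A) by Laplace expansion over Z[x] (polynomials as
# coefficient lists, constant term first), then verify p(A) = 0.

def poly_add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def det_poly(M):
    """Determinant of a matrix of polynomials, expanded along row 0."""
    if len(M) == 1:
        return M[0][0]
    total = [0]
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        term = poly_mul([(-1) ** j], poly_mul(M[0][j], det_poly(minor)))
        total = poly_add(total, term)
    return total

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, -1, 0], [3, 4, 1], [0, 5, -2]]   # any integer matrix works here
n = len(A)
xI_minus_A = [[[-A[i][j], 1] if i == j else [-A[i][j]] for j in range(n)]
              for i in range(n)]
p = det_poly(xI_minus_A)                   # characteristic polynomial of A

# Evaluate p at the matrix A: result = sum of p[k] * A^k.
I = [[int(i == j) for j in range(n)] for i in range(n)]
result, power = [[0] * n for _ in range(n)], I
for c in p:
    result = [[result[i][j] + c * power[i][j] for j in range(n)]
              for i in range(n)]
    power = mat_mul(power, A)
print(result)  # the zero matrix, as Cayley-Hamilton predicts
```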

It is natural to wonder why ℤ plays an important role in the proof above. One of the reasons for this phenomenon is that ℤ is the initial object in the category of rings: for every ring R there is a unique ring homomorphism ℤ → R.


Does there exist a continuous bijection from (0, 1) to [0, 1]? (1)

Well, it *feels* like the answer should be “no”. At least if the question was

Does there exist a continuous bijection from [0, 1] to (0, 1)? (2)

the answer would be a trivial “no”. The continuous image of a compact set is compact; in particular, the image of [0, 1] under a continuous map is compact, and (0, 1) is not compact, so a continuous bijection from [0, 1] to (0, 1) cannot exist. So it is natural to wonder how questions (1) and (2) are related. If f : (0, 1) → [0, 1] is a continuous bijection, then f⁻¹ : [0, 1] → (0, 1) is a well-defined function. But, if **in addition**, f⁻¹ is known to be continuous, then once again we have a continuous bijection from [0, 1] to (0, 1), which we know is impossible. For general reference:

**Definition.** When f : X → Y is a continuous bijection, and f⁻¹ : Y → X is continuous, then we say that f is a homeomorphism from X to Y.

Thus, being a homeomorphism is a stronger property than being continuous and bijective. Roughly speaking, homeomorphisms preserve all intrinsic topological properties (e.g. compactness), just like how group homomorphisms preserve the group structure. To naively answer question (1), it would be very good to know when continuous bijections are homeomorphisms. Because *if* continuous bijections were always homeomorphisms, then the answer to question (1) would be a definitive “no” by what we explained above. Unfortunately, that’s not always the case. But not all is lost. We have the following sufficient condition:

**Theorem.** If f : X → Y is a continuous bijection, where X is compact and Y is Hausdorff, then f is a homeomorphism.

But the domain of f is (0, 1), which is not compact, so we can’t apply the above theorem. What a pity! After much teasing, I think it would only be fair to present the complete answer:

**Solution to Question (1).** Suppose that f : (0, 1) → [0, 1] is a continuous bijection. Then there exists a unique t ∈ (0, 1) such that f(t) = 1. Let ε > 0 be so small that [t − ε, t + ε] ⊆ (0, 1). By the Intermediate Value Theorem, the intervals [t − ε, t] and [t, t + ε] get mapped under f onto intervals [a, 1] and [b, 1], respectively, for some a, b < 1 (we have a, b < 1 because f, being injective, cannot be constant on either interval). But then every value in (max(a, b), 1) would be achieved at least twice by f, contradiction.
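Numerically, the two-preimage phenomenon is easy to watch with a toy map (my own example, with sin standing in for a hypothetical f attaining its maximum at an interior point):

```python
import math

# f attains the value 1 at the interior point t = 1/2; any value just below 1
# is then hit once on each side of t, so f cannot be injective.
f = lambda t: math.sin(math.pi * t)
t, eps, v = 0.5, 0.25, 0.9

def solve(lo, hi, target, n=60):
    """Bisection for f(x) = target on [lo, hi], where f - target changes sign."""
    for _ in range(n):
        mid = (lo + hi) / 2
        if (f(mid) - target) * (f(lo) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

left = solve(t - eps, t, v)    # a preimage of v in [t - eps, t]
right = solve(t, t + eps, v)   # a second, distinct preimage in [t, t + eps]
print(left, right)             # two distinct points with the same value
```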

**Acknowledgement.** I learnt the above proof from t.b.’s answer here:

http://math.stackexchange.com/questions/42308/continuous-bijection-from-0-1-to-0-1

Other proofs given in that thread are also very nice.


In “Undergraduate Commutative Algebra” by Miles Reid, the following proposition is proved (page 53), from which some important properties of Noetherian modules can be derived.

**Proposition.** Let 0 → L → M → N → 0 be a short exact sequence of R-modules. Then, M is Noetherian if and only if both L and N are Noetherian.

I have written up a proof of this here. (I have filled in some details, where Miles has left them to the reader).

Here are some consequences:

(1) If M₁, …, M_k are Noetherian modules (for some k ≥ 1), then M₁ ⊕ ⋯ ⊕ M_k is Noetherian.

(2) If R is a Noetherian ring, then an R-module M is Noetherian if and only if it is a finitely generated R-module.

(3) If R is a Noetherian ring and M is a finitely generated R-module, then any submodule N ⊆ M is again a finitely generated R-module.

**Proof.** (1) We recall that the direct sum can be realized as an exact sequence 0 → M₁ → M₁ ⊕ M₂ → M₂ → 0, where the first map is x ↦ (x, 0) and the second map is (x, y) ↦ y. Applying the Proposition, we obtain that M₁ ⊕ M₂ is a Noetherian module. By simple induction, this is extended to direct sums of any finite number of Noetherian modules.

(2) If an R-module M is finitely generated, then we have a surjection φ : Rⁿ → M for some positive integer n, where Rⁿ is a free module of rank n. In other words, we have an exact sequence 0 → ker φ → Rⁿ → M → 0. From (1), we know Rⁿ is Noetherian. By one of the directions (⇒) of the Proposition, we obtain that M is also Noetherian. Conversely, if M is a Noetherian R-module, it is clear that M is a finitely generated R-module, for otherwise we would obtain an ascending chain of submodules

⟨x₁⟩ ⊊ ⟨x₁, x₂⟩ ⊊ ⟨x₁, x₂, x₃⟩ ⊊ ⋯

that does not terminate. Here ⟨x₁, …, x_k⟩ is the submodule generated by the elements x₁, …, x_k, and each x_{k+1} is chosen outside ⟨x₁, …, x_k⟩, which is possible because M is not finitely generated.

(3) Since M is a finitely generated R-module, we get from (2) that M is a Noetherian R-module. It is clear that any submodule of a Noetherian module is again a Noetherian module. So the submodule N ⊆ M is Noetherian. Applying (2) again, we get that N is a finitely generated R-module, as desired.


**Exercise 6 (Chapter 1)**. A ring A is such that every ideal not contained in the nilradical contains a non-zero idempotent (that is, an element e ≠ 0 such that e² = e). Prove that the nilradical and Jacobson radical of A are equal.

Before proceeding to the solution, let’s define what each of these terms means. Even before that, let me note that throughout, the word “ring” will mean commutative ring with 1 (multiplicative identity). This is the assumption made in the beginning of A&M. The **nilradical** N of a ring A is the set of all nilpotent elements of A (an element x ∈ A is called **nilpotent** if xⁿ = 0 for some positive integer n.) Proposition 1.8 in A&M gives the following characterization:

**Proposition 1.8.** The nilradical of A is the intersection of all the prime ideals of A.

On the other hand, the **Jacobson radical** J of a ring A is defined to be the intersection of all maximal ideals of A. Proposition 1.9 in A&M gives the following characterization:

**Proposition 1.9.** x ∈ J if and only if 1 − xy is a unit in A for all y ∈ A.

Using these two results, we can proceed to the solution of the exercise above.

**Solution of Exercise 6 (Chapter 1)**. Since every maximal ideal is a prime ideal, and the nilradical is the intersection of all prime ideals, it is clear that the nilradical is contained in the intersection of all maximal ideals, that is, the nilradical is contained in the Jacobson radical. This proves one of the inclusions (note that this inclusion, namely N ⊆ J, holds generally in any ring). Now we need to show that the Jacobson radical is a subset of the nilradical. Let x ∈ J. We want to show that x ∈ N. Assume, to the contrary, x ∉ N. Then the ideal generated by x, namely the principal ideal (x), is not contained in the nilradical, since it contains x. By hypothesis, this means that there exists a non-zero idempotent element e in (x). Hence, there exists y ∈ A such that e = xy. Now since x ∈ J, using Proposition 1.9 we get that 1 − xy = 1 − e is a unit. But e(1 − e) = e − e² = 0. Since 1 − e is a unit, we can multiply this equation by its inverse to get e(1 − e)(1 − e)⁻¹ = 0, or equivalently, e = 0. This contradicts e ≠ 0, and so we conclude that x ∈ N, as desired. This completes the proof that the nilradical and Jacobson radical of the ring A coincide.
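In ℤ/nℤ both radicals can be computed by brute force, and they indeed coincide there (this only illustrates the conclusion; the code and names are mine):

```python
# Compute the nilradical and the Jacobson radical of Z/nZ directly from the
# definitions (Prop 1.9 is used for the Jacobson radical) and compare them.

def nilradical(n):
    return {x for x in range(n)
            if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

def jacobson(n):
    units = {u for u in range(n) if any((u * v) % n == 1 for v in range(n))}
    # x is in the Jacobson radical iff 1 - x*y is a unit for every y.
    return {x for x in range(n)
            if all((1 - x * y) % n in units for y in range(n))}

for n in range(2, 40):
    assert nilradical(n) == jacobson(n)
print("nilradical = Jacobson radical in Z/nZ for 2 <= n < 40")
```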


Showing that the contraction P = f⁻¹(Q) is an ideal in A is routine and I shall omit the details. (Throughout, f : A → B is a ring homomorphism and Q ⊆ B is a prime ideal.) To show that P is a prime ideal in A, one way to proceed is to apply the definition of a prime ideal: if ab ∈ P, then we want to show that a ∈ P or b ∈ P. I think this is the kind of approach that seems natural for a beginner in ring theory (like me). But recently, reading “Introduction to Commutative Algebra” by Atiyah & Macdonald, I learnt the following solution, which seems more insightful. The idea is as follows:

To show that P is a prime ideal in A, it suffices to prove that A/P is an integral domain. We claim that A/P is isomorphic to a subring of B/Q, and this would finish the proof immediately for the following reason: We know B/Q is an integral domain (because Q is a prime ideal in B) and every subring of an integral domain is again an integral domain. So we need to exhibit an injective ring homomorphism φ : A/P → B/Q. This map is explicitly given by

φ(a + P) = f(a) + Q.

For every a, b ∈ A, we have

φ((a + P) + (b + P)) = f(a + b) + Q = (f(a) + Q) + (f(b) + Q) = φ(a + P) + φ(b + P)

and

φ((a + P)(b + P)) = f(ab) + Q = (f(a) + Q)(f(b) + Q) = φ(a + P)φ(b + P),

which shows that φ is a ring homomorphism. We will now show that φ is injective. Assume φ(a + P) = 0 in B/Q for some a ∈ A. Then, by definition,

f(a) + Q = Q,

or equivalently,

f(a) ∈ Q,

which implies that a ∈ f⁻¹(Q) = P, so that a + P = 0 in A/P. Thus, φ is injective. We have proved that φ is an injective ring homomorphism. As a result, A/P is isomorphic to a subring of B/Q, and so A/P is an integral domain, proving that P is a prime ideal in A.
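A concrete instance of the contraction (my own example, not from A&M): take the inclusion f : ℤ → ℤ[i] and the prime ideal Q = (2 + i) of ℤ[i]; since N(2 + i) = 5, the contraction f⁻¹(Q) should be the prime ideal 5ℤ:

```python
# Test divisibility of an ordinary integer a by q = 2 + i in Z[i]:
# a / (2 + i) = a * (2 - i) / 5 lies in Z[i] iff 5 | 2a and 5 | -a.

def in_ideal(a, q=(2, 1)):
    """Is the integer a in the principal ideal generated by q = c + di?"""
    c, d = q
    norm = c * c + d * d            # N(2 + i) = 5
    re, im = a * c, -a * d          # components of a * conjugate(q)
    return re % norm == 0 and im % norm == 0

contraction = [a for a in range(-30, 31) if in_ideal(a)]
print(contraction == [a for a in range(-30, 31) if a % 5 == 0])  # True
```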
