I am currently a Teaching Assistant for my advisor's class, Introduction to Theoretical Computer Science, which essentially covers a standard curriculum on automata theory. We recently covered the Pumping Lemma for regular languages:
Lemma 1 (Pumping Lemma for Regular Languages) Let $L$ be a regular language. Then there exists a "pumping constant" $p$ for $L$ such that for all $w \in L$ with $|w| \ge p$, there is a decomposition of $w$ into $w = xyz$ such that $|xy| \le p$, $|y| \ge 1$, and $xy^iz \in L$ for all $i \ge 0$.
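The lemma is really just the pigeonhole principle applied to the states of a DFA: a run on a long-enough accepted string must revisit some state, and the loop between the two visits is $y$. Here is a small sketch of my own (the DFA, a toy machine accepting strings with an even number of a's, is an illustration, not from the course materials):

```python
# Sketch: extract a pumping decomposition x, y, z from a DFA's run.
# The first repeated state gives |xy| <= (number of states), |y| >= 1.

def decompose(delta, start, w):
    """Follow the DFA on w; return (x, y, z) from the first repeated state."""
    seen = {start: 0}            # state -> position where first visited
    state = start
    for i, ch in enumerate(w):
        state = delta[state][ch]
        if state in seen:        # state repeats: the loop read is y
            j = seen[state]
            return w[:j], w[j:i + 1], w[i + 1:]
        seen[state] = i + 1
    return None                  # w shorter than the pumping constant

# Toy 2-state DFA accepting strings over {a, b} with an even number of a's
delta = {0: {'a': 1, 'b': 0}, 1: {'a': 0, 'b': 1}}

x, y, z = decompose(delta, 0, 'abab')
accepts = lambda s: sum(c == 'a' for c in s) % 2 == 0
# Pumping y any number of times keeps the string in the language
assert all(accepts(x + y * i + z) for i in range(5))
```

The pigeonhole guarantee is that any accepted $w$ with $|w| \ge$ (number of states) forces a repeat, which is exactly where the pumping constant $p$ comes from.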
The main “use” of the Pumping Lemma is to prove that a language is not regular, via a simple proof by contradiction. The most standard example of non-regularity is $\{0^n1^n : n \ge 0\}$, but another example that I have done in my recitation sections is the language of perfect-square-length strings, $\{1^{n^2} : n \ge 0\}$, which follows from a simple argument showing that a pumped string's length falls strictly between 2 consecutive perfect squares.
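The consecutive-squares argument can be sanity-checked numerically: pumping $w = 1^{p^2}$ once gives length $p^2 + |y|$ with $1 \le |y| \le p$, which is strictly between $p^2$ and $(p+1)^2$ and hence not a perfect square. A quick script of my own to confirm this for many hypothetical pumping constants:

```python
# Check: p*p + |y| with 1 <= |y| <= p is never a perfect square,
# since it lies strictly between the consecutive squares p*p and (p+1)**2.

from math import isqrt

def is_square(n):
    return isqrt(n) ** 2 == n

for p in range(1, 200):               # hypothetical pumping constants
    for y_len in range(1, p + 1):     # every allowed |y|
        pumped = p * p + y_len
        assert p * p < pumped < (p + 1) ** 2
        assert not is_square(pumped)
```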
One of my students pointed me to this blog post. It claims to show that “checking whether an input number is prime or not” is regular (because there is a regular expression for it). Of course, I was skeptical: I already knew how to show that the language consisting of all prime-length strings is not regular:

Theorem 2 The language $L = \{1^n : n \text{ is prime}\}$ is not regular.
Proof: Suppose it is. Then choose $w = 1^n$ with $n$ any prime number at least $p$ (the pumping length of $L$) $+ 1$; note, the $+1$ here is needed. A small technical detail: it is not immediately obvious that there is a prime strictly larger than $L$'s pumping length, but we can guarantee such a prime's existence by the “infinitely many primes” proof of Euclid.
We want to decompose $w$ into $xyz$, as usual. Whatever the decomposition is, $|z| \ge 1$ (because of the $+1$ from earlier: $|xy| \le p$ and $|w| = n \ge p + 1$). Therefore, $1 \le |y| \le n - 1$. Choose $i = n + 1$. It suffices to show $|xy^{n+1}z|$ is not prime. We have $|xy^{n+1}z| = n + n|y|$, which factors into $n(1 + |y|)$. Since $n \ge 2$, and $1 + |y| \ge 2$, then $n(1 + |y|)$ is not prime. $\Box$
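The factorization at the heart of the proof can be verified by brute force: for sample prime lengths $n$ and every possible $|y|$, the pumped length $n(1 + |y|)$ is composite. A quick check of my own:

```python
# Sanity check of the proof: pumping with i = n + 1 gives length
# n + n*|y| = n*(1 + |y|), which is never prime (both factors >= 2).

from math import isqrt

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

for n in [2, 3, 5, 7, 11, 13, 17, 19, 23]:   # sample prime lengths
    for y_len in range(1, n):                # every possible |y|
        pumped_len = n + n * y_len
        assert pumped_len == n * (1 + y_len)
        assert not is_prime(pumped_len)
```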
And of course, Theorem 2 only covers the case of a unary alphabet (although the argument can easily be extended). Also note that the language above is not even context-free (by a similar counting-then-show-nondivisibility argument): in fact, if it were context-free, we would immediately be able to check whether a number is prime or not in $O(n^3)$ time (via the well-known CYK algorithm). Looking at the reference my student sent to me, I came across this line: “…The regular expression engine will backtrack and will make (11+?) match “111”…” (emphasis mine). Of course, anyone familiar with automata theory knows that a finite automaton cannot backtrack; that kind of power is only available (as taught in standard classes) to stronger machines like Turing Machines.
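To see the non-regular feature concretely: the regex in question appears to be the well-known backreference trick, and the backreference `\1` is exactly the part that no DFA can express — Python's `re` engine matches it by backtracking. A short demonstration (with the usual caveat that this is my reconstruction of the trick, not the linked post's exact code):

```python
import re

# The "prime test" regex: '1'*n matches iff n is NOT prime.
#   ^1?$        handles n = 0 and n = 1
#   ^(11+?)\1+$ matches iff n has a proper divisor >= 2:
#               the group captures 1^d and \1+ demands n be a multiple of d.
NOT_PRIME = re.compile(r'^1?$|^(11+?)\1+$')

def is_prime(n):
    return NOT_PRIME.match('1' * n) is None

assert [n for n in range(30) if is_prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The point is that `\1` refers back to whatever the group happened to capture on this attempt, which requires remembering an unbounded amount of input — precisely what a finite automaton cannot do, and why "regular expression" here is a misnomer.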
And that got me thinking. The reason this student thought the language was regular is that the name “regular expression” is so colloquially mis-used (as is the norm in theory…), whereas there was effectively a Turing Machine hidden behind the scenes! So, where else has notation or terminology been mis-used in Computer Science?
One well-known example (on which I differ from the standard opinion) is Landau notation, specifically big-O notation, commonly used to classify a bound on some function, such as the running time of an algorithm. Here is a standard definition of big-O notation given in an introductory algorithms class:
Definition 3 Let $f, g : \mathbb{N} \to \mathbb{R}_{\ge 0}$ be two non-negative, real-valued functions of a natural number $n$. Then we say that $f$ is big-O of $g$, commonly written $f = O(g)$, if there are fixed constants $c > 0$ and $n_0$ (independent of $n$) such that $f(n) \le c \cdot g(n)$ for all $n \ge n_0$.
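To make the quantifiers concrete, here is a tiny check (the functions and constants are examples of my own choosing) exhibiting witnesses $c$ and $n_0$ for the claim $3n + 5 = O(n)$:

```python
# Witnesses for the definition: f(n) = 3n + 5 is O(n)
# with the (non-unique) choice c = 4, n0 = 5.
f = lambda n: 3 * n + 5
g = lambda n: n

c, n0 = 4, 5
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))

# The witnesses are not unique: c = 8, n0 = 1 also works.
assert all(f(n) <= 8 * g(n) for n in range(1, 10_000))
```

Of course a finite range does not prove the "for all" part, but it illustrates what the constants are doing.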
One can note that for some function $g$, $O(g)$ describes many functions, and is actually a set of functions, all with the property listed above. Here is my opinion: I hate the notation $f = O(g)$. Why? It suggests that a function is equal to a set of functions, which is simply false! I've heard the argument that the notation means that $f$ is equal to some function in $O(g)$. But why go the extra mile? Why not just use the correct mathematical notation of $f \in O(g)$?
I completely understand the need for lackadaisical definitions: sometimes the math of the problem at hand is too dense, and a more intuitive approach is necessary. However, I would argue the opposite for most cases here. My main argument stems from having worked on multi-person projects before: although many of us do it all the time, we should not assume that the person reading our code/requirements document/etc. knows exactly what we mean in every scenario. Even though we cannot always be semantically precise, there are places where we can be formally correct, and I think taking advantage of those opportunities is what matters most.