Star height problem

The star height problem in formal language theory is the question of whether all regular languages can be expressed using regular expressions of limited star height, i.e. with a limited nesting depth of Kleene stars. Specifically, is a nesting depth of one always sufficient? If not, is there an algorithm to determine how many are required? The problem was raised by (Eggan 1963).

Families of regular languages with unbounded star height

The first question was answered in the negative in 1963, when Eggan gave examples of regular languages of star height n for every n. Here, the star height h(L) of a regular language L is defined as the minimum star height among all regular expressions representing L. The first few languages found by (Eggan 1963) are described in the following, by means of giving a regular expression for each language:

[math]\displaystyle{ \begin{alignat}{2} e_1 &= a_1^* \\ e_2 &= \left(a_1^*a_2^*a_3\right)^*\\ e_3 &= \left(\left(a_1^*a_2^*a_3\right)^*\left(a_4^*a_5^*a_6\right)^*a_7\right)^*\\ e_4 &= \left( \left(\left(a_1^*a_2^*a_3\right)^*\left(a_4^*a_5^*a_6\right)^*a_7\right)^* \left(\left(a_8^*a_9^*a_{10}\right)^*\left(a_{11}^*a_{12}^*a_{13}\right)^*a_{14}\right)^* a_{15}\right)^* \end{alignat} }[/math]

The construction principle for these expressions is that expression [math]\displaystyle{ e_{n+1} }[/math] is obtained by concatenating two copies of [math]\displaystyle{ e_n }[/math], appropriately renaming the letters of the second copy using fresh alphabet symbols, concatenating the result with another fresh alphabet symbol, and finally surrounding the resulting expression with a Kleene star. The remaining, more difficult part is to prove that for [math]\displaystyle{ e_n }[/math] there is no equivalent regular expression of star height less than n; a proof is given in (Eggan 1963).
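The star height of a given regular expression (as opposed to that of a language, which minimizes over all equivalent expressions) can be computed by a straightforward recursion over its syntax tree, and Eggan's construction principle translates directly into code. The following sketch uses a hypothetical minimal AST; the class and function names are illustrative, not from the literature:

```python
from dataclasses import dataclass

# Minimal regular-expression AST (illustrative, not a standard library).
@dataclass
class Sym:          # a single alphabet symbol
    name: str

@dataclass
class Cat:          # concatenation of subexpressions
    parts: tuple

@dataclass
class Star:         # Kleene star of a subexpression
    inner: object

def star_height(e):
    """Star height of an expression: maximum nesting depth of Kleene stars."""
    if isinstance(e, Sym):
        return 0
    if isinstance(e, Cat):
        return max(star_height(p) for p in e.parts)
    return 1 + star_height(e.inner)     # Star case

def symbols(e):
    """Set of alphabet symbols occurring in the expression."""
    if isinstance(e, Sym):
        return {e.name}
    if isinstance(e, Cat):
        return set().union(*(symbols(p) for p in e.parts))
    return symbols(e.inner)

def eggan(n):
    """Eggan's expression e_n: two renamed copies of e_{n-1}, a fresh
    symbol, and a surrounding Kleene star."""
    counter = 0
    def fresh():
        nonlocal counter
        counter += 1
        return Sym(f"a{counter}")
    def build(k):
        if k == 1:
            return Star(fresh())                      # e_1 = a1*
        left = build(k - 1)                           # first copy
        right = build(k - 1)                          # renamed second copy
        return Star(Cat((left, right, fresh())))      # ( e e' a )*
    return build(n)

# star_height(eggan(n)) == n, over an alphabet of 2**n - 1 symbols.
```

Note that `star_height` here returns the height of the expression as written; computing the star height of the *language* it denotes is exactly the hard problem discussed below.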

However, Eggan's examples use a large alphabet, of size [math]\displaystyle{ 2^n-1 }[/math] for the language with star height n. He thus asked whether we can also find examples over a binary alphabet. This was proved to be true shortly afterwards by (Dejean Schützenberger). Their examples can be described by an inductively defined family of regular expressions over the binary alphabet [math]\displaystyle{ \{a,b\} }[/math] as follows; cf. (Salomaa 1981):

[math]\displaystyle{ \begin{alignat}{2} e_1 & = (ab)^* \\ e_2 & = \left(aa(ab)^*bb(ab)^*\right)^* \\ e_3 & = \left(aaaa \left(aa(ab)^*bb(ab)^*\right)^* bbbb \left(aa(ab)^*bb(ab)^*\right)^*\right)^* \\ \, & \cdots \\ e_{n+1} & = (\,\underbrace{a\cdots a}_{2^n}\, \cdot \, e_n\, \cdot\, \underbrace{b\cdots b}_{2^n}\, \cdot\, e_n \,)^* \end{alignat} }[/math]

Again, a rigorous proof is needed for the fact that [math]\displaystyle{ e_n }[/math] does not admit an equivalent regular expression of lower star height. Proofs are given by (Dejean Schützenberger) and by (Salomaa 1981).
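The inductive definition above can be generated mechanically. The following small sketch produces the expressions as plain strings; the function name is illustrative:

```python
def dejean_schutzenberger(n):
    """String form of the binary-alphabet expression e_n of star height n:
    e_1 = (ab)*, and e_{k+1} = ( a^(2^k) e_k b^(2^k) e_k )*."""
    e = "(ab)*"                     # e_1
    for k in range(1, n):
        e = "(" + "a" * 2**k + e + "b" * 2**k + e + ")*"
    return e

# dejean_schutzenberger(2) == "(aa(ab)*bb(ab)*)*"
```

Each step doubles the run lengths of the a- and b-blocks and embeds two copies of the previous expression, so the expression size grows exponentially in n while the alphabet stays binary.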

Computing the star height of regular languages

In contrast, the second question turned out to be much more difficult, and it remained a famous open problem in formal language theory for over two decades (Brzozowski 1980). For years there was little progress. The pure-group languages were the first interesting family of regular languages for which the star height problem was proved to be decidable (McNaughton 1967). The general problem remained open for more than 25 years until it was settled by Hashiguchi, who in 1988 published an algorithm to determine the star height of any regular language. The algorithm was entirely impractical, being of non-elementary complexity. To illustrate its immense resource consumption, Lombardy and Sakarovitch (2002) give some actual numbers for a small example; the bounds involve towers of exponentials such as [math]\displaystyle{ 10^{10^{10}} }[/math].

Note that the number [math]\displaystyle{ 10^{10^{10}} }[/math] alone has ten billion zeros when written down in decimal notation, and is already far larger than the number of atoms in the observable universe.

A much more efficient algorithm than Hashiguchi's procedure was devised by Kirsten in 2005. This algorithm runs, for a given nondeterministic finite automaton as input, within double-exponential space. Yet the resource requirements of this algorithm still far exceed what is considered practically feasible.

This algorithm was optimized and generalized to trees by Colcombet and Löding in 2008 (Colcombet Löding), as part of the theory of regular cost functions. It was implemented in 2017 in the tool suite Stamina.[1]

References

  1. Nathanaël Fijalkow, Hugo Gimbert, Edon Kelmendi, Denis Kuperberg: "Stamina: Stabilisation Monoids in Automata Theory". CIAA 2017: 101–112. Tool available at https://github.com/nathanael-fijalkow/stamina/
