Andrius Kulikauskas
I am learning, contemplating and writing up a derivation of the classification of Sheffer polynomials. Here are the steps in the derivation.
Preliminaries
The coefficients of the polynomials are presumably rational numbers. A polynomial set is a sequence of polynomials {$\{P_n(x):n=0,1,2...\}$} in which the degree of {$P_n(x)$} is {$n$} for all {$n$}. Polynomial sets are considered equivalent if they are obtainable from each other by a linear change of the independent variable {$x$} and/or a rescaling of the dependent variable. (What is meant by the dependent variable?) A polynomial set is orthogonal with respect to a positive measure {$\textrm{d}\alpha$} if {$\int_{-\infty}^{\infty}P_n(x)P_m(x)\textrm{d}\alpha(x)=h_n\delta_{nm}$} where the moments {$\mu_n$} of {$\textrm{d}\alpha$}, {$\mu_n=\int_{-\infty}^{\infty}x^n\textrm{d}\alpha$}, exist for all {$n$}.
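The definition can be exercised on a concrete case. A minimal sketch, assuming the uniform weight {$\textrm{d}\alpha=\textrm{d}x$} on {$[-1,1]$} (whose moments are {$\mu_n=\frac{2}{n+1}$} for even {$n$} and {$0$} for odd {$n$}) and the first few Legendre polynomials; all names here are my own illustration, not notation from the text:

```python
from fractions import Fraction

# Moments of the uniform weight dα = dx on [-1, 1]:
# μ_n = ∫ x^n dx = 2/(n+1) for even n, 0 for odd n.
def moment(n):
    return Fraction(2, n + 1) if n % 2 == 0 else Fraction(0)

def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists [c0, c1, ...]
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# ⟨p, q⟩ = ∫ p(x) q(x) dα, computed term by term from the moments
def inner(p, q):
    return sum(c * moment(i) for i, c in enumerate(poly_mul(p, q)))

# First Legendre polynomials as coefficient lists
P0 = [Fraction(1)]
P1 = [Fraction(0), Fraction(1)]
P2 = [Fraction(-1, 2), Fraction(0), Fraction(3, 2)]

# pairwise orthogonality under this weight
assert inner(P0, P1) == 0 and inner(P0, P2) == 0 and inner(P1, P2) == 0
```

The point is that orthogonality is checkable from the moments alone, without ever integrating.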
How do orthogonal polynomials express the spectral theorem?
This recurrence relation for orthogonal polynomials is derived by expressing {$xP_n(x)$} in terms of the orthogonal polynomials as they form a basis for all polynomials. We observe that {$xP_n(x)$} has degree {$n+1$} and thus the expression cannot have any terms of higher degree. Nor can it have any terms of degree {$k<n-1$} as can be seen by linearizing {$xP_n(x)P_k(x)$}, which must yield {$0$}. {$x$} is the quantum. Thus it is meaningful to focus on {$xP_n(x)$} and thus contemplate the following expression: {$$xP_n(x) = P_{n+1}(x) + A_n P_n(x) + B_n P_{n-1}(x)$$}
{$P_n(x)$} does not depend on terms lower than {$n-2$}. This apparently imposes a quadratic growth on what otherwise could be growing exponentially in size and complexity. Consider...
- {$\int P_n(x)P_k(x) d\alpha = c_k\delta_{nk}$}
- {$f(x)=f_0P_0(x)+f_1P_1(x)+\cdots+f_{n-1}P_{n-1}(x)$}
- {$\int P_n(x)f(x) d\alpha = 0$}
- Look at {$\int xP_n(x)P_k(x) d\alpha$}. This becomes an argument about {$P_n$}: suppose {$n\geq k$}. Because {$xP_k(x)$} has degree {$k+1$}, if {$n>k+1$}, which means {$n-1 > k$}, then the integral is {$0$}.
We get thus a chain where each polynomial is related to the polynomials directly above and below. This gives a way of thinking about a Dynkin diagram (especially their chains) in terms of raising operators {$x$} and lowering operators {$\frac{\textrm{d}}{\textrm{dx}}$}. A raising operator adds a new cell. A lowering operator removes any one of the existing {$n$} cells, thus marking it.
We can then make evident the recurrence relation: {$$P_{n+1}(x) = (x - A_n) P_n(x) - B_n P_{n-1}(x)$$} This is like climbing a ladder with a left foot on the rung below and the right foot on the rung below that. Your weight can be on either foot or both.
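The ladder can be climbed concretely. A minimal sketch, assuming the simplest instance, the probabilists' Hermite polynomials, where {$A_n=0$}, {$B_n=n$} and the weight is the standard Gaussian with moments {$\mu_{2m}=(2m-1)!!$}; the recurrence builds the polynomials and the moments confirm orthogonality:

```python
from math import factorial

# Gaussian moments: μ_n = (n-1)!! for even n, 0 for odd n
def moment(n):
    if n % 2 == 1:
        return 0
    r = 1
    for k in range(1, n, 2):
        r *= k
    return r

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def inner(p, q):
    return sum(c * moment(i) for i, c in enumerate(poly_mul(p, q)))

# Climb the ladder: P_{n+1}(x) = (x - A_n) P_n(x) - B_n P_{n-1}(x)
# with A_n = 0 and B_n = n, giving He_n.
def hermite(N):
    polys = [[1], [0, 1]]                 # He_0 = 1, He_1 = x
    for n in range(1, N):
        x_pn = [0] + polys[n]             # multiply P_n by x
        prev = polys[n - 1] + [0, 0]      # pad P_{n-1} to the same length
        polys.append([a - n * b for a, b in zip(x_pn, prev)])
    return polys

P = hermite(6)
for m in range(6):
    for n in range(6):
        expected = factorial(n) if m == n else 0   # h_n = n! for He_n
        assert inner(P[m], P[n]) == expected
```

The left-foot/right-foot picture is visible in the code: each new polynomial touches only the two rungs below it.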
Given a weight, it is possible to calculate its moments. Then the orthogonal polynomials associated to the weight can be calculated by a determinant of the moments. This suggests why the combinatorics of orthogonal polynomials are based on permutations and their statistics. I should investigate that. It also shows that the problem of classifying orthogonal polynomials is really the matter of classifying the weights from which they originate. The weight is the relevant space. I found Chihara's, Viennot's and Kim and Zeng's combinatorial interpretations of the moments.
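The determinant of moments can be made explicit. A sketch, assuming the standard Hankel-determinant formula where {$P_n(x)$} is proportional to the determinant of the matrix with rows {$(\mu_i,\ldots,\mu_{i+n})$} for {$i=0,\ldots,n-1$} and last row {$(1,x,\ldots,x^n)$}; the Gaussian moments serve as a test weight:

```python
from fractions import Fraction

# Gaussian moments as a test weight: μ_n = (n-1)!! for even n, else 0
def moment(n):
    if n % 2:
        return 0
    r = 1
    for k in range(1, n, 2):
        r *= k
    return r

def det(m):
    # Laplace expansion along the first row; fine for small matrices
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, a in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * a * det(minor)
    return total

# Expand the moment determinant along its last row (1, x, ..., x^n):
# the coefficient of x^k is the signed minor obtained by deleting column k.
def ortho_poly(n):
    rows = [[moment(i + j) for j in range(n + 1)] for i in range(n)]
    coeffs = []
    for k in range(n + 1):
        minor = [row[:k] + row[k + 1:] for row in rows]
        coeffs.append((-1) ** (n + k) * (det(minor) if minor else 1))
    return coeffs

p3 = ortho_poly(3)
monic = [Fraction(c, p3[-1]) for c in p3]   # normalize by the leading coefficient
assert monic == [0, -3, 0, 1]               # recovers He_3 = x^3 - 3x
```

So the weight's moments alone determine the polynomial set, which is the point of the paragraph above.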
I want to classify orthogonal polynomial sets {$\{P_n(x)\}$} of the form {$$\sum_{n=0}^{\infty}P_n(x)t^n=A(t)e^{xu(t)}$$} where {$A(t)=\sum_{n=0}^{\infty}a_nt^n$}, {$a_0=1$}, and {$u(t)=\sum_{n=0}^{\infty}u_nt^n$}, {$u_0=0$}, {$u_1=1$}. The generating function {$e^{xu(t)}$} is precisely that for polynomials of binomial type. The coefficients of such a sequence are given by Bell polynomials on a sequence of scalars {$c_1, c_2, \cdots$} (not to be confused with the {$a_n$} above), so that {$p_n(x)=\sum_{k=1}^{n}B_{n,k}(c_1,\cdots,c_{n-k+1})x^k$}. Furthermore, if we set {$C(t)=\sum_{n=1}^{\infty}\frac{c_n}{n!}t^n$}, then {$C^{-1}(\frac{\textrm{d}}{\textrm{dx}})$} is the delta operator of this sequence. The form {$A(t)e^{xu(t)}$} is consequential in that it establishes an additional constraint beyond orthogonality; working together, the two constraints leave very narrow possibilities. This form is remarkably constricting in how it links together the various powers of {$x$} and {$t$} when combined with orthogonality. Perhaps this is because orthogonality is a quadratic relationship, and here it is being combined with an exponential relationship. The weights are exponential: they have an analytic symmetry, they are reproduced by differentiation along with an additional factor, and they overwhelm any polynomial factors as we go out to infinity. The quadratic constraint is a general theme, as with the tridiagonality of the Jacobi operator. I suspect the same theme is behind the restrictiveness of the Dynkin diagrams, which could be understood as chains of dimensions on which raising and lowering operators act. A mighty exponential horse is fettered by quadratic orthogonal shackles which keep one leg from extending too far from another.
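The Bell polynomial expansion can be sanity-checked with the standard recurrence for partial Bell polynomials, {$B_{n,k}=\sum_i \binom{n-1}{i-1}x_iB_{n-i,k-1}$}; the scalar sequences below are chosen only for illustration:

```python
from math import comb

# Partial Bell polynomials B_{n,k}(x_1, ..., x_{n-k+1}) via the
# standard recurrence B_{n,k} = Σ_i C(n-1, i-1) x_i B_{n-i, k-1}
def bell(n, k, x):
    # x[i] holds the scalar x_{i+1}
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return sum(comb(n - 1, i - 1) * x[i - 1] * bell(n - i, k - 1, x)
               for i in range(1, n - k + 2))

# With every scalar equal to 1, B_{n,k} counts set partitions of an
# n-set into k blocks, the Stirling number S(n, k)
ones = [1] * 10
assert bell(4, 2, ones) == 7          # S(4,2) = 7

# With scalars (1, 0, 0, ...), i.e. u(t) = t, the binomial-type
# sequence p_n(x) = Σ_k B_{n,k} x^k collapses to p_n(x) = x^n
e1 = [1] + [0] * 9
assert all(bell(n, k, e1) == (1 if n == k else 0)
           for n in range(1, 6) for k in range(1, n + 1))
```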
How does {$A(t)e^{xu(t)}$} relate to Lie groups and Lie algebras?
- Lie group: The exponential map from the Lie algebra to the Lie group is not always onto, even if the group is connected (though it does map onto the Lie group for connected groups that are either compact or nilpotent).
- Exponential map (Lie theory) If G is compact, it has a Riemannian metric invariant under left and right translations, and the Lie-theoretic exponential map for G coincides with the exponential map of this Riemannian metric.
- For {$X$} in {$\mathfrak{g}$}, there is a unique Lie group homomorphism {$t\rightarrow\textrm{exp}(tX)$} corresponding to the Lie algebra homomorphism {$t\rightarrow tX$}.
- Lie group - Lie algebra correspondence
This suggests that {$xu(t)$} is an element of a Lie algebra. The Lie group is compact if {$A(t)=1$} and not compact otherwise. Thus {$A(t)$} is the factor that expresses noncompactness.
It is interesting to consider the difference between the Sheffer polynomials and the classical orthogonal polynomials.
- The Hermite polynomials and Laguerre polynomials are both Sheffer and classical.
- The Jacobi polynomials are classical but not Sheffer.
- The Meixner polynomials and Meixner-Pollaczek polynomials are Sheffer but not classical.
A helpful paper is available: Kil H. Kwon, Lance L. Littlejohn. Classification of Classical Orthogonal Polynomials. 1997. Journal of the Korean Mathematical Society. The classical orthogonal polynomials are solutions to the differential equation: {$$L = l_2(x)\frac{\textrm{d}^2}{\textrm{dx}^2} + l_1(x)\frac{\textrm{d}}{\textrm{dx}}$$} {$$Ly(x)=\lambda_ny(x)$$} {$$L[y](x) = l_2(x)y''(x) + l_1(x)y'(x) = \lambda_n y(x)$$} This is part of Sturm-Liouville theory, which deals with equations of the form: {$$\frac{\textrm{d}}{\textrm{dx}}[p(x)\frac{\textrm{dy}}{\textrm{dx}}] + q(x)y = - \lambda w(x)y$$} The main result concerns the situation where {$p(x)$} and {$w(x)$} are continuous functions over the finite interval {$[a,b]$} with some boundary requirements. It states that the eigenvalues {$\lambda_n$} are real and distinct, that the eigenfunction {$y_n(x)$} for {$\lambda_n$} has exactly {$n-1$} zeroes in {$(a,b)$}, and that the normalized eigenfunctions form an orthonormal basis of the Hilbert space {$L^2([a,b],w(x)dx)$} under the w-weighted inner product. The time-independent Schrödinger equation is handled by this theory.
The Fivefold Classification
{$\{P_n(x)\}$} is a Sheffer orthogonal polynomial set iff it satisfies the recurrence relation: {$$P_{n+1}(x)=[x-(ln+f)]P_n(x)-n(kn+c)P_{n-1}(x)$$} As shown by Galiffa and Riston, this recurrence relation is determined by {$a_1$}, {$a_2$}, {$u_2$}, {$u_3$}. Namely, {$f=-a_1$}, {$l=-2u_2$}, {$c=a_1^2-2a_2+2a_1u_2$}, {$k=4u_2^2-3u_3$}. We can turn these equations around to give combinatorial meaning to the initial terms of {$A(t)$} and {$u(t)$}: {$a_1 = -f$}, {$a_2 = \frac{f^2+fl-c}{2}$}, {$u_2=\frac{-l}{2}$}, {$u_3=\frac{l^2-k}{3}$}. We can suppose {$f=0$} by considering the translation of {$x$} to {$(x-f)$}. Then {$a_2=-\frac{c}{2}$}. This yields: {$$A(t)=1 - \frac{c}{2}t^2 + a_3t^3 + a_4t^4 + \cdots$$} {$$u(t)=t - \frac{l}{2}t^2 + \frac{l^2-k}{3}t^3 + u_4t^4 + \cdots$$} Note that the higher order terms are not relevant, in that they are determined by the initial terms. Then we get the most natural and profound expressions for {$A'(t)$} and {$u'(t)$}. {$A'(t) = - ct + \textrm{echo terms}$} {$u'(t) = 1 - lt + (l^2-k)t^2 + \textrm{echo terms}$} Alternatively, factoring {$1+lt+kt^2=(1-\alpha t)(1-\beta t)$}, we have {$l=(-\alpha)+(-\beta)$} and {$k=(-\alpha)(-\beta)$}. Thus we have {$$u'(t) = 1 + (\alpha + \beta) t + (\alpha^2 + \alpha\beta + \beta^2)t^2 + \textrm{echo terms}$$} I mean by "echo terms" that the higher order terms are completely determined by the initial terms and have no independent significance. Note that we can interpret {$l$} to mean {$\textrm{NAND(A,B)}$}, for NOT(A AND B) = (NOT A) OR (NOT B), and we can interpret {$k$} as {$\textrm{NOR(A,B)}$}, for NOT(A OR B) = (NOT A) AND (NOT B). NAND and NOR are the two logical operators from which all other operators can be derived. And algebraically, we get an interesting logic where NOR(A,B) = (-A)(-B) = AB = AND(A,B). I should study this logic. It is meaningful to focus on {$xP_n(x)$} and thus contemplate the following expression: {$$xP_n(x) = P_{n+1}(x) + (nl + f) P_n(x) + n(nk + c) P_{n-1}(x)$$}
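The four relations and their inversions are mechanical to verify; a minimal round-trip sketch, with sample values chosen arbitrarily for illustration:

```python
from fractions import Fraction

# Invert the Galiffa-Riston relations: recover the initial series
# coefficients a_1, a_2, u_2, u_3 from the recurrence data f, l, c, k
def to_series(f, l, c, k):
    a1 = -f
    u2 = -l / 2
    a2 = (f * f + f * l - c) / 2
    u3 = (l * l - k) / 3
    return a1, a2, u2, u3

# The forward relations: f = -a_1, l = -2u_2,
# c = a_1^2 - 2a_2 + 2a_1 u_2, k = 4u_2^2 - 3u_3
def to_recurrence(a1, a2, u2, u3):
    return (-a1,
            -2 * u2,
            a1 * a1 - 2 * a2 + 2 * a1 * u2,
            4 * u2 * u2 - 3 * u3)

# Round trip with arbitrary sample values
vals = tuple(Fraction(v) for v in (7, 3, -2, 5))
assert to_recurrence(*to_series(*vals)) == vals
```

For instance, the Hermite-like case {$f=l=k=0$}, {$c=1$} gives {$a_1=0$}, {$a_2=-\frac{1}{2}$}, {$u_2=u_3=0$}.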
Here are my notes for deriving the differential equation relating {$A(t)$} and {$A'(t)$}. Apparently, this is essentially the same as the differential equation relating {$f(s)=\sum_{n=1}^{\infty}\frac{P_n(0)}{n!}s^n$} and {$f'(s)=\sum_{n=0}^{\infty}\frac{P_{n+1}(0)}{n!}s^n$}. The idea is to sum over the recurrence relation and then identify {$f(s)$} and {$f'(s)$}, place them on either side, and divide. The initial terms of the series yield a constant, but fortunately that constant is {$0$}. For {$u'(t)$} it is important to appreciate that {$u'(t)=\frac{\textrm{du}}{\textrm{dt}}$} and so {$$t'(u)=\frac{\textrm{dt}}{\textrm{du}}=\frac{1}{u'(t)}$$}
{$\{P_n(x)\}$} is an orthogonal polynomial set iff {$u(t)$} and {$A(t)$} satisfy the differential equations where {$(1-\alpha t)(1-\beta t)=1+lt+kt^2$} is a polynomial with real coefficients but possibly complex roots: {$$u'(t)=\frac{1}{(1-\alpha t)(1-\beta t)}=\frac{1}{1+lt+kt^2}=\frac{A'(t)}{-(c+k)tA(t)}=\frac{A'(t)}{\gamma tA(t)}$$} Here {$A(t)=\sum_{n=0}^{\infty}\frac{P_n(0)}{n!}t^n$}. Thus, given a choice of steps {$\alpha$} and {$\beta$}, a link {$l$} expresses "or" {$-(\alpha + \beta)$} with a negative weight, and a kink {$k$} expresses "and" {$\alpha\beta$}. What is the meaning of {$c+k$}? I should read Differential equation and learn more about them. I can also study Bachioua Lahcene. On Pearson families of distributions and its applications. Here the classification is a straightforward analysis of the possibilities for {$u'(t)$}:
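The expansion of {$u'(t)$} can be recovered by formally inverting {$1+lt+kt^2$} as a power series; a sketch with hypothetical values of {$l$} and {$k$}:

```python
from fractions import Fraction

# Formal power-series inverse of c_0 + c_1 t + c_2 t^2 + ... with c_0 = 1:
# each new coefficient is chosen to kill the t^n term of the product
def inv_series(c, N):
    inv = [Fraction(1)]
    for n in range(1, N):
        s = sum(c[j] * inv[n - j] for j in range(1, min(n, len(c) - 1) + 1))
        inv.append(-s)
    return inv

# Sample values: 1 + 3t + 2t^2 = (1 + t)(1 + 2t), i.e. alpha = -1, beta = -2
l, k = Fraction(3), Fraction(2)
u_prime = inv_series([Fraction(1), l, k], 5)

# First terms match 1 - l t + (l^2 - k) t^2 + ...
assert u_prime[0] == 1
assert u_prime[1] == -l
assert u_prime[2] == l * l - k

# ... which in terms of the roots reads 1 + (α+β)t + (α²+αβ+β²)t² + ...
alpha, beta = Fraction(-1), Fraction(-2)
assert u_prime[1] == alpha + beta
assert u_prime[2] == alpha**2 + alpha * beta + beta**2
```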
Note that {$u(0)=0$} determines the constant of integration. For the Laguerre polynomials, it is {$-\frac{1}{\alpha}$}, and in every other case it is {$0$}. We have the Taylor series expression {$-\textrm{ln}(1-\alpha t)=\alpha t + \frac{1}{2}\alpha^2t^2 + \frac{1}{3}\alpha^3t^3 + \frac{1}{4}\alpha^4t^4 +\cdots$}. The most general generating function is for the Meixner polynomials: {$$\frac{1}{\beta - \alpha}(\textrm{ln}(1-\alpha t) - \textrm{ln}(1-\beta t)) = t + \frac{1}{2}t^2(\alpha+\beta)+\frac{1}{3}t^3(\alpha^2+\alpha\beta+\beta^2) + \frac{1}{4}t^4(\alpha^3+\alpha^2\beta+\alpha\beta^2+\beta^3)+\cdots$$} On the right hand side, I have cancelled {$\beta - \alpha$} from the numerator and denominator. From the resulting generating function I can derive the cases for the Hermite, Charlier and Laguerre cases. But so far I don't see how I can get the Meixner-Pollaczek generating function by setting {$\beta=\overline{\alpha}$}.
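The logarithmic antiderivative can be checked term by term against {$u'(t)=\frac{1}{(1-\alpha t)(1-\beta t)}$}; a sketch with sample values of {$\alpha$} and {$\beta$}, assumed distinct:

```python
from fractions import Fraction

# Sample alpha, beta, chosen only for this check
a, b = Fraction(2), Fraction(5)
N = 8

# u(t) = (1/(β-α))(ln(1-αt) - ln(1-βt)) has coefficients
# u_n = (β^n - α^n) / ((β-α) n) for n ≥ 1
u = [Fraction(0)] + [(b**n - a**n) / ((b - a) * n) for n in range(1, N)]

# Its derivative should be 1/((1-αt)(1-βt)), whose t^m coefficient
# is the complete homogeneous sum α^m + α^{m-1}β + ... + β^m
for m in range(N - 1):
    h = sum(a**i * b**(m - i) for i in range(m + 1))
    assert (m + 1) * u[m + 1] == h

assert u[1] == 1     # so u_1 = 1, as required of u(t)
```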
The Meixner generating function can be written: {$$(\frac{1}{1-\alpha t})^{\frac{1}{\alpha-\beta}(\frac{\gamma}{\alpha}+x)}(\frac{1}{1-\beta t})^{\frac{1}{\beta-\alpha}(\frac{\gamma}{\beta}+x)}$$} I suppose this is the most general form from which the others can be specialized. I'm curious how this relates to the Jacobi polynomials, whose weight {$(1-x)^a(1+x)^b$} seems very much related to this expression.
I studied how the variables {$a$}, {$b$}, {$c$}, {$d$} that Galiffa & Riston use in their recurrence relation appear in their generating functions. But it seems to me, upon inspection, that they aren't the same variables, unfortunately.
Note that in the Meixner-Pollaczek case we have {$\alpha=a+bi$} and {$|\alpha|^2=a^2+b^2$} and we are dealing with a circle, as with {$\textrm{tan}\,\theta$}. What is the meaning of {$\textrm{tan}^{-1}$}? Compare with the Pearson system for classifying distributions as solutions to the differential equation {$$\frac{\textrm{dy}}{\textrm{dx}} = \frac{y(m-x)}{a+bx+cx^2}$$}
Viennot's Combinatorial Interpretation
Relating the five cases
Understand how the various cases relate to each other, for example, in taking limits, or in perturbing degeneracies in ways so that they are nondegenerate.
Further steps
Further steps I can take...
- I will write to India to ask how to contact Xavier Viennot and who is working on related combinatorics.
- How can I relate Zeng's interpretation of {$P_n(x)$} with pairs of combinatorial interpretations for {$A(t)$} and {$e^{xu(t)}$}?
- Learn about the Pearson system of distributions.
- Explore what happens when we successively integrate {$\textrm{ln} x$}.
Sources
Xavier Viennot overviews the combinatorics of the moments of orthogonal polynomials: Hermite (involutions), Laguerre (permutations), Charlier (partitions), Meixner (ordered partitions), Meixner-Pollaczek (secant = even alternating permutations), Chebyshev I (ways of choosing half), Chebyshev II (Catalan).

This page was last modified on May 31, 2021, at 04:33 PM