
Confusion in the definition of Ordinal Arithmetic



I'm trying to fill in some gaps regarding the definition of ordinal arithmetic. In particular, we define
$$ \alpha +_{ON} \beta =
\begin{cases}
\alpha & \text{if } \beta = 0, \\
S(\alpha +_{ON} \gamma) & \text{if } \beta = S(\gamma), \\
\bigcup \{\alpha +_{ON} \delta : \delta < \beta\} & \text{if } \beta \text{ is a limit ordinal.}
\end{cases} $$

By the transfinite recursion theorem, I understand that this is a sufficient definition of ordinal addition, since the function furnished by the theorem is unique. More specifically, for each $\alpha \in ON$ we take the class function $F: V \rightarrow V$ given by $F(x) = \alpha$ if $x = 0$, $F(x) = S(F(\beta))$ if $x$ is a function with domain the successor ordinal $S(\beta)$, $F(x) = \bigcup \operatorname{ran}(x)$ if $x$ is a function with domain a limit ordinal, and $F(x) = 0$ otherwise. This $F$ produces the unique class function $g: ON \rightarrow V$ such that $g(\delta) = F(g \upharpoonright \delta)$. Then $g(\beta) = \alpha +_{ON} \beta$.

My question is, how can we show/know that $F$ is actually a class function, since $F$ itself is defined recursively (when the input is a function with successor ordinal as its domain)?
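For comparison, here is a non-recursive way the successor clause is often written, reading the needed value off the argument function $x$ itself rather than invoking $F$ again (a sketch in the notation above; whether this matches the intended definition is my assumption):

```latex
F(x) =
\begin{cases}
\alpha & \text{if } x = 0,\\
S\bigl(x(\gamma)\bigr) & \text{if } x \text{ is a function with domain } S(\gamma),\\
\bigcup \operatorname{ran}(x) & \text{if } x \text{ is a function whose domain is a limit ordinal},\\
0 & \text{otherwise.}
\end{cases}
```

Here $x$ is the restriction $g \upharpoonright S(\gamma)$, so $x(\gamma)$ is already available as data; no self-reference is needed to evaluate $F$.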



Unreplied Posts

Can generating functions be used to solve evolution matrix differential equations and recurrence relations of matrices?


Generating functions seem to be a powerful tool in discrete mathematics for solving differential equations and recurrence relations. I’ve been trying to figure out whether these methods can be extended to differential equations that involve matrices, such as Schrödinger’s equation. Is there anything that prevents a solution to these types of differential equations from being written using generating functions? For example, a solution to Schrödinger’s equation is given as

$$ U = T e^{-\frac{i}{\hbar} \int_{0}^{t'} H(t) \, dt} $$

where $H(t)$ is some time-dependent Hamiltonian matrix and $T$ is the time-ordering operator. This form seems very reminiscent of exponential generating functions. For example, the generating function for the Bessel functions is given by $$ e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}=\sum_{n=-\infty}^{\infty} J_n(x)t^n $$

So, if the time evolution of the Hamiltonian gave something similar to the generating function above, with $x$ now the matrix Hamiltonian, can the exponential be rewritten in a form using Bessel functions of matrix argument? I’d assume that the matrix argument of a Bessel function would behave like an analytic function of matrix argument, but everything I find seems to write Bessel/hypergeometric functions in terms of zonal polynomials, and the Wikipedia page for hypergeometric functions of matrix argument even mentions that those functions are not the same as applying an ordinary scalar function to a matrix argument. That doesn’t make sense to me, though, since these special functions have these generating function relations. I’d also appreciate it if someone could point me toward the relevant literature.
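As a numerical sketch: if one defines $J_n$ of a symmetric matrix spectrally (apply $J_n$ to the eigenvalues), the scalar generating-function identity does carry over, since it holds eigenvalue by eigenvalue. This is distinct from the zonal-polynomial matrix-argument functions mentioned above; the matrix $X$, the value of $t$, and the truncation $N$ below are all illustrative choices.

```python
import numpy as np
from scipy.linalg import expm, eigh
from scipy.special import jv

# Hypothetical small symmetric "Hamiltonian", purely for illustration.
X = np.array([[0.3, 0.1],
              [0.1, 0.5]])
t = 0.7

# Left-hand side: exp((X/2)(t - 1/t)) as a genuine matrix exponential.
lhs = expm(0.5 * (t - 1.0 / t) * X)

# Right-hand side: sum_{n=-N}^{N} J_n(X) t^n, with J_n(X) defined
# spectrally: diagonalize X and apply J_n to its eigenvalues.
w, Q = eigh(X)
N = 30  # truncation; J_n(x) decays very fast in |n| for small |x|
rhs = sum((Q @ np.diag(jv(n, w)) @ Q.T) * t**n for n in range(-N, N + 1))

print(np.allclose(lhs, rhs))  # True: the series reproduces the exponential
```

This only shows the identity survives for the spectral definition of a matrix function; it says nothing about the zonal-polynomial hypergeometric functions, which are a different object.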


There is a 1px red line on my monitor screen

I know it might sound kinda stupid (“it’s just a line at the top”), but it bothers me because of the toc.

Note: it interacts with what’s behind it; it’s not a simple line. Sometimes it looks like it’s turning off and on smoothly.


It’s not from the browser, although it seems like it. Using a program I have that deletes everything, I deleted the browser, and after I restarted the computer limited to Microsoft applications, the line was still there.
(it doesn’t appear on screenshots)

Calculating L-smoothness constant for logistic regression.


I am trying to find the $L$-smoothness constant of the following function (logistic regression cost function) in order to run gradient descent with an appropriate stepsize.

The function is given as $$f(x)=-\frac{1}{m} \sum_{i=1}^m \left(y_i \log \left(s\left(a_i^{\top} x\right)\right)+\left(1-y_i\right) \log \left(1-s\left(a_i^{\top} x\right)\right)\right)+\frac{\gamma}{2}\|x\|^2$$ where $a_i \in \mathbb{R}^n$, $y_i \in \{0,1\}$, and $s(z)=\frac{1}{1+\exp(-z)}$ is the sigmoid function.

The gradient is given as
$\nabla f(x)=\frac{1}{m} \sum_{i=1}^m a_i\left(s\left(a_i^{\top} x\right)-y_i\right)+\gamma x$.

My idea was that the smoothness constant $L$ has to be at least as large as every eigenvalue of the Hessian of the given function. This follows from the fact that if $f$ is $L$-smooth, then $g(x)=\frac{L}{2} x^{\top} x-f(x)$ is a convex function, and therefore its Hessian has to be positive semi-definite.
The second-order partial derivatives of $f$ are given as

$$ \frac{\partial^2}{\partial x_k \partial x_j} f(x)=\frac{1}{m} \sum_{i=1}^m s\left(a_i^{\top} x\right)\left(1-s\left(a_i^{\top} x\right)\right)[a_i]_k [a_i]_j+\gamma \delta_{kj} $$

From the following GitHub post, I know that $L=\frac{1}{4} \lambda_{\max}\left(A^{\top} A\right)+\gamma$, where $\lambda_{\max}$ denotes the largest eigenvalue. This seems plausible, since I figured out that $s\left(a_i^{\top} x\right)\left(1-s\left(a_i^{\top} x\right)\right) \leq \frac{1}{4}$ for all $x$.

But I am not able to fit everything together. I would appreciate any help.
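A minimal numerical sanity check, assuming the bound carries the $\frac{1}{m}$ factor from the average in $f$ (i.e. $L=\frac{1}{4m}\lambda_{\max}(A^{\top}A)+\gamma$, where the rows of $A$ are the $a_i$); the data below are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 5
A = rng.normal(size=(m, n))   # rows are the feature vectors a_i
gamma = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hessian(x):
    # Hessian of f at x: (1/m) * A^T D A + gamma * I,
    # where D = diag(s(a_i^T x)(1 - s(a_i^T x))) and each entry is <= 1/4.
    d = sigmoid(A @ x) * (1.0 - sigmoid(A @ x))
    return (A.T * d) @ A / m + gamma * np.eye(n)

# Candidate smoothness constant (note the 1/m from the average in f).
L = np.linalg.eigvalsh(A.T @ A).max() / (4 * m) + gamma

# The largest Hessian eigenvalue at random points never exceeds L.
worst = max(np.linalg.eigvalsh(hessian(rng.normal(size=n))).max()
            for _ in range(100))
print(worst <= L)  # True
```

The inequality holds because $D \preceq \frac{1}{4}I$ implies $\frac{1}{m}A^{\top}DA + \gamma I \preceq \frac{1}{4m}A^{\top}A + \gamma I$.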


GnuCash – Help Buttons Not Working

GnuCash 2.6.15 – Debian Stretch

gnucash-docs and yelp packages installed.

While in GnuCash, when I activate a sub-window “Help” button (e.g. as seen by clicking Edit -> Find… -> Help), the mouse pointer changes from a pointer icon to the active processing icon for about 15 seconds. It then changes back to a pointer icon without any other action. No help dialog is created.

However, when clicking (on the main toolbar menu) Help -> Tutorial and Concepts Guide, said guide comes up as it should!

I suspect I may be missing a package, but which one?

Ultrafilters and compactness


A topological space is compact if and only if every ultrafilter is convergent.

While I was reading the proof of one direction of the theorem above, there is something I could not understand. The following is the proof of that direction.

Let $X$ be compact and assume that $\mathcal{F}$ is an ultrafilter on $X$ without a limit point. Then for each $x\in X$, there exists an open neighborhood $U_{x}$ of it such that $U_{x}$ does not contain any member of $\mathcal{F}$. Since $\mathcal{U}=\{U_{x} : x\in X\}$ is an open cover of $X$, there exists a finite subfamily $\{U_{x_{i}} : i=1,2,\ldots,n\}$ of $\mathcal{U}$ such that $X=\bigcup_{i=1}^{n} U_{x_{i}}$. Let $A\in\mathcal{F}$ be fixed. Then $A=(A\cap U_{x_{1}})\cup (A\cap U_{x_{2}})\cup \ldots \cup (A\cap U_{x_{n}})\in\mathcal{F}$, and thus there exists an $i\in\{1,2,\ldots,n\}$ such that the subset $A\cap U_{x_{i}}$ is in $\mathcal{F}$, which is a contradiction.

The thing that I could not understand is why there exists an $i\in\{1,2,\ldots,n\}$ such that $A\cap U_{x_{i}}$ must be in $\mathcal{F}$. If you could clarify this, it would be highly appreciated. Thank you.


Representing $G=\text{GL}^+(2,\mathbf{R})$ as the matrix product $G=TH$. If $H=\text{SO}(2)$, what is $T$?


In this paper (Equations 2.6 and 2.7) the author seems to suggest that one can represent the $\text{GL}^+(4,\mathbf{R})$ group using the product of two exponentials: $\exp(\epsilon \cdot T) \exp(u \cdot J)$, where the $T$ are the generators of shears and dilation, and the $J$ are the generators of Lorentz transformations.

My take on the subject is that since $T$ and $J$ do not commute, one cannot write $G$ as a product of these two exponentials. One must instead write $G=\exp(\epsilon \cdot T + u \cdot J)$. It appears to me the author is wrong.

Is the author correct, or am I?

How can I represent $\text{GL}^+(2,\mathbf{R})$ as the matrix product $G=TH$, where $H=\text{SO}(2)$?
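One candidate for the $2\times 2$ case is the polar decomposition, which writes any $G\in\text{GL}^+(2,\mathbf{R})$ as $G = TH$ with $T$ symmetric positive definite (so $\log T$ is symmetric, the span of shears and dilation) and $H\in\text{SO}(2)$, even though the two generator sets do not commute. A numerical sketch (the random matrix is illustrative):

```python
import numpy as np
from scipy.linalg import polar, logm

rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2))
if np.linalg.det(G) < 0:
    G[0] *= -1          # force det(G) > 0, so G lies in GL^+(2, R)

# Left polar decomposition: G = P @ U with P symmetric positive
# definite and U orthogonal; det(G) > 0 forces U into SO(2).
U, P = polar(G, side='left')
T, H = P, U

print(np.allclose(G, T @ H))              # True: exact factorization
print(np.isclose(np.linalg.det(H), 1.0))  # True: H is a rotation
S = logm(T)
print(np.allclose(S, S.T))                # True: log T is symmetric
```

Existence of the factorization does not contradict the non-commutativity argument: $\exp(\epsilon\cdot T)\exp(u\cdot J)$ need not equal $\exp(\epsilon\cdot T+u\cdot J)$, but the map $(\epsilon,u)\mapsto\exp(\epsilon\cdot T)\exp(u\cdot J)$ can still be surjective onto the group.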


Bounds on the maximum real root of a polynomial with coefficients $-1,0,1$


Suppose I have a polynomial of the form
$$f(x)=x^n - a_{n-1}x^{n-1} - \ldots - a_1x - 1$$

where each $a_k$ can be either $0$ or $1$.

I’ve tried a bunch of examples and found that the maximum real root always seems to lie between $1$ and $2$, but I am not aware of any specific results for polynomials of this structure.

Using the IVT, we can see pretty simply that $f(1)\leq 0$ and $f(2)> 0$, so there has to be a root in this interval. But that’s a pretty wide range, and I was wondering if this has been studied before.
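The observation is easy to check exhaustively for small degrees: enumerate all $2^{n-1}$ coefficient patterns, take the largest real root of each, and confirm the overall maximum sits in $(1, 2)$. A small sketch (the degree $n=6$ is an arbitrary choice):

```python
import itertools
import numpy as np

def max_real_root(coeffs):
    """Largest real root; coeffs are highest-degree-first (numpy.roots order)."""
    r = np.roots(coeffs)
    real = r[np.abs(r.imag) < 1e-9].real
    return real.max()  # a real root in (0, 2) always exists: f(0) = -1 < 0

n = 6
best = 0.0
for bits in itertools.product([0, 1], repeat=n - 1):
    # f(x) = x^n - a_{n-1} x^{n-1} - ... - a_1 x - 1
    coeffs = [1.0] + [-b for b in bits] + [-1.0]
    best = max(best, max_real_root(coeffs))

print(1.0 < best < 2.0)  # True: the extreme case is x^n - x^{n-1} - ... - x - 1
```

The maximizing pattern (all $a_k = 1$) gives the polynomial $x^n - x^{n-1} - \ldots - x - 1$, whose largest root approaches $2$ as $n$ grows, so the interval $(1,2)$ cannot be shrunk uniformly in $n$.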