# The Algebraic Analog of P vs NP

Mathematicians often isolate themselves within their specialties (the algebraic geometers vs the analysts vs the probabilists vs the theorists and so on). But some of the most brilliant results arise from applying tools from one field to problems in another.

One of the biggest open problems in mathematics and computer science is the P vs NP problem. It asks whether every problem whose solution can be checked in polynomial time can also be solved in polynomial time. Since Stephen Cook formalized the problem in 1971, many people in complexity theory have spent their time trying to make progress on it. So why is P vs NP still unsolved? For one, some people suspect that the problem cannot be proved or disproved from our standard set of mathematical assumptions (i.e., P vs NP might be independent of ZFC and other axiomatic systems). Additionally, several broad proof techniques have been shown to be insufficient (specifically relativizing proofs and natural proofs).

(Debatably) the most viable track for progress on P vs NP that's still on the table (for now, anyway) is actually rooted in algebra! The idea, introduced by Leslie Valiant in 1979, is to study two classes of polynomial families, VP and VNP, and ask whether they're equal. (For those whose ears are ringing at that name, this is the same Valiant who would later come up with the "probably approximately correct," or PAC, learning model.) Many believe that settling whether or not VP equals VNP would be a huge step toward solving P vs NP.

## VP and VNP

To describe VP and VNP, we only need a few terms to classify some polynomials. Our multivariate polynomials will be over a field $K$, so just imagine the coefficients are real numbers, in variables $x_1, x_2, \cdots$. We need a notion of complexity for a polynomial. The arithmetic operations $+, -, \times$ used to compute a polynomial can be arranged in an arithmetic circuit whose inputs are the variables $x_1, x_2, \cdots$ and constants from $K$. For instance, $x^2_1+2x_1x_2+x^2_2$ can be computed by a circuit that builds each of the three monomials and adds them up, or by a smaller circuit that computes $x_1+x_2$ once and multiplies the result by itself.

The size of an arithmetic circuit is the number of gates (the circles in a circuit diagram). The complexity of a polynomial $f_n$, denoted $L(f_n)$, is the minimum size of an arithmetic circuit computing it. From the example above, you can see that the obvious arithmetic circuit for a polynomial isn't necessarily the one whose size is smallest. Note, though, that a result of Galois theory shows that polynomials of degree 5 and higher cannot in general be factored using a finite number of arithmetic operations and radicals. So the factoring used on the quadratic above, which lessens the number of operations, cannot always be done. Lastly, a function $g$ of $n$ is polynomially bounded if $g(n) \leq p(n)$ for some polynomial $p$ and all sufficiently large $n$.
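To make the gate-counting concrete, here is a minimal sketch (my own illustration, not from the original post) of the two circuits for $x_1^2 + 2x_1x_2 + x_2^2$, with each arithmetic gate written as one operation:

```python
def expanded(x1, x2):
    """Obvious circuit: build each monomial, then add -- 6 gates."""
    g1 = x1 * x1   # gate 1: x1^2
    g2 = x1 * x2   # gate 2: x1*x2
    g3 = 2 * g2    # gate 3: multiply by the constant 2
    g4 = x2 * x2   # gate 4: x2^2
    g5 = g1 + g3   # gate 5
    g6 = g5 + g4   # gate 6
    return g6

def factored(x1, x2):
    """Factored circuit: (x1 + x2)^2 -- only 2 gates."""
    s = x1 + x2    # gate 1
    return s * s   # gate 2

# Both circuits compute the same polynomial.
assert expanded(3, 5) == factored(3, 5) == 64
```

Both circuits compute the same polynomial, but the factored one uses 2 gates instead of 6, which is exactly why $L(f_n)$ is defined as a minimum over all circuits.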

Now we’re ready to define the classes. VP is the class of families of polynomials $(f_n)$ for which

• the function mapping $n$ to the degree of $f_n$ is polynomially bounded, and
• $L(f_n)$ is polynomially bounded.
• (Most definitions also require that, for $v(n)$ the number of variables appearing in $f_n$, the function mapping $n$ to $v(n)$ is polynomially bounded. However, this is a consequence of $L(f_n)$ being polynomially bounded, since every variable must feed into some gate.)

Informally, VP is the class consisting of all families of polynomials which have polynomially bounded degree and complexity. So let’s see some examples and non-examples:

1. The family $f_n(x_1,x_2)=(x_1+x_2)^n$ is in VP.
2. The family $f_n(x)=x^{2^n}$ is not in VP. While the polynomial can be computed in an arithmetic circuit of size $n+1$, the degree of $f_n$ is $2^n$, which is not polynomially bounded.
3. The determinant family, $D_n$, is the determinant of an $n \times n$ matrix with distinct indeterminate entries. The naïve arithmetic circuit has size on the order of $n!$ (see the definition of $det(A)$ below), but using Gaussian elimination, the arithmetic circuit can be made to have size polynomial in $n$. So $D_n$ is in VP.
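The claim in example 2 can be checked with repeated squaring: $n$ multiplication gates suffice to reach $x^{2^n}$ (the post counts $n+1$, presumably including the input node). A minimal sketch:

```python
def pow_2n(x, n):
    """Compute x**(2**n) using n multiplication gates (repeated squaring)."""
    gates = 0
    y = x
    for _ in range(n):
        y = y * y    # one multiplication gate per squaring
        gates += 1
    return y, gates

y, gates = pow_2n(3, 5)
assert y == 3 ** (2 ** 5) and gates == 5
```

So the circuit size grows linearly in $n$ while the degree $2^n$ explodes, which is exactly why the degree condition in the definition of VP is needed and why this family fails it.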

To refresh anyone’s memory on the determinant, for the $3 \times 3$ case:

\begin{aligned} D_3=det \left(\begin{bmatrix} x_1 & x_2 & x_3 \\ x_4 & x_5 & x_6 \\ x_7 & x_8 & x_9 \end{bmatrix} \right)=x_1x_5x_9-x_1x_8x_6-x_2x_4x_9+x_2x_6x_7-x_3x_5x_7+x_3x_4x_8. \end{aligned}

More generally, for $A$ an $n \times n$ matrix with $(i,j)$th entry $a_{i,j}$

\begin{aligned} det(A)=\sum\limits_{\sigma \in S_n}\text{sgn}(\sigma)\prod\limits_{i=1}^n a_{i,\sigma(i)}, \end{aligned}

where $S_n$ is the symmetric group, the set of permutations on $n$ elements.
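The contrast in example 3 can be sketched in code (an illustration of mine, not from the post): the permutation-sum definition costs on the order of $n!$ operations, while Gaussian elimination costs $O(n^3)$. (Strictly speaking, Gaussian elimination uses division, which plain $\{+,-,\times\}$ circuits don't have, but divisions are known to be removable at polynomial cost.)

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def sgn(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def det_sum(A):
    """Permutation-sum definition of det: n! terms."""
    n = len(A)
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def det_gauss(A):
    """Gaussian elimination over exact rationals: O(n^3) operations."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    det = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det          # a row swap flips the sign
        det *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return det

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
assert det_sum(A) == det_gauss(A)
```

Both routes compute the same polynomial in the entries; only the circuit size differs, and the polynomial-size route is what puts $D_n$ in VP.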

VNP is the class of families of polynomials $(f_n)$, with $f_n \in K[x_1,\cdots,x_{v(n)}]$, for which there exists a family $g_n \in K[x_1,\cdots,x_{u(n)}]$ in VP such that

\begin{aligned} f_n(x_1,\cdots,x_{v(n)})=\sum\limits_{e \in \{0,1\}^{u(n)-v(n)}}g_n(x_1,\cdots,x_{v(n)},e_{v(n)+1},\cdots,e_{u(n)}). \end{aligned}
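To see the Boolean sum in action, here is a toy instance (my own illustrative choice, not from Valiant's paper): take $g(x_1,x_2,e) = e\,x_1x_2 + (1-e)(x_1+x_2)$, so summing over $e \in \{0,1\}$ yields $f(x_1,x_2) = x_1x_2 + x_1 + x_2$.

```python
from itertools import product

def g(x1, x2, e):
    # A small "easy" polynomial: picks x1*x2 when e = 1, x1 + x2 when e = 0.
    return e * x1 * x2 + (1 - e) * (x1 + x2)

def f(x1, x2):
    # Boolean sum over the auxiliary variable e, as in the VNP definition.
    return sum(g(x1, x2, e) for e in (0, 1))

# Check f agrees with x1*x2 + x1 + x2 on a grid of points.
for x1, x2 in product(range(-3, 4), repeat=2):
    assert f(x1, x2) == x1 * x2 + x1 + x2
```

This toy $f$ is of course itself in VP; the interesting members of VNP, like the permanent, are exponential Boolean sums of easy polynomials with no known small circuit.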

In other words, given a monomial, i.e., a term like $x_1x_2$ or $x_1^3$, we can determine the monomial’s coefficient in $f_n$ with a polynomial-size circuit. As for some examples of families in VNP:

1. Any family in VP lies in VNP.
2. The permanent family, denoted $P_n$, is the permanent of an $n \times n$ matrix with distinct indeterminate entries. The naïve computation for the permanent has arithmetic circuit size $n!$, while the best known has size about $2^n$. $P_n$ is in VNP.

As a reminder, for the $3 \times 3$ case:

\begin{aligned} P_3=perm \left(\begin{bmatrix} x_1 & x_2 & x_3 \\ x_4 & x_5 & x_6 \\ x_7 & x_8 & x_9 \end{bmatrix} \right)=x_1x_5x_9+x_1x_8x_6+x_2x_4x_9+x_2x_6x_7+x_3x_5x_7+x_3x_4x_8, \end{aligned}

and more generally for an $n \times n$ matrix $A$

\begin{aligned} perm(A)=\sum\limits_{\sigma \in S_n}\prod\limits_{i=1}^n a_{i,\sigma(i)}. \end{aligned}
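The "$n!$ versus about $2^n$" claim for the permanent can be sketched with Ryser's inclusion-exclusion formula (my own illustration; Ryser's formula is the classical $O(2^n \cdot n^2)$ algorithm alluded to in the post):

```python
from itertools import permutations
from math import prod

def perm_sum(A):
    """Permutation-sum definition of the permanent: n! terms."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def perm_ryser(A):
    """Ryser's formula: inclusion-exclusion over column subsets, ~2^n * n^2 ops."""
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if mask >> j & 1]
        # Product over rows of the row-sum restricted to the chosen columns.
        rowprod = prod(sum(A[i][j] for j in cols) for i in range(n))
        total += (-1) ** len(cols) * rowprod
    return (-1) ** n * total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert perm_sum(A) == perm_ryser(A)
```

Even the best known approach is exponential in $n$; shaving this down to polynomial size would, by the completeness result below, collapse VNP into VP.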

Comparing the sum definitions of $perm(A)$ and $det(A)$: the permanent contains the same products as the determinant, but every term is added with a plus sign instead of a sign depending on the permutation.

The permanent family is VNP-complete over every field $K$ of characteristic other than 2, meaning that computing the permanent is at least as hard as computing any polynomial in VNP. (In characteristic 2, such as over the field of two elements $\mathbb{F}_2$, we have $1=-1$, so the permanent and the determinant coincide.) So proving that the permanent can be computed with a small circuit, which could include proving that the permanent can be reduced to computing not too many determinants, would prove that every family in VNP can be computed with a small circuit.
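The characteristic-2 coincidence is easy to verify numerically, since $\mathrm{sgn}(\sigma) = \pm 1 \equiv 1 \pmod 2$. A small sanity check (mine, for illustration):

```python
from itertools import permutations
from math import prod
import random

def sgn(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def perm_eq_det_mod2(A):
    """Over F_2 every sign is 1, so the permanent and determinant agree mod 2."""
    n = len(A)
    perms = list(permutations(range(n)))
    perm = sum(prod(A[i][p[i]] for i in range(n)) for p in perms)
    det = sum(sgn(p) * prod(A[i][p[i]] for i in range(n)) for p in perms)
    return perm % 2 == det % 2

random.seed(0)
for _ in range(20):
    A = [[random.randint(0, 1) for _ in range(4)] for _ in range(4)]
    assert perm_eq_det_mod2(A)
```

This is precisely why the completeness result needs characteristic different from 2: in characteristic 2 the permanent inherits the determinant's small circuits.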

## The big picture

The exact relationship between P and VP, or between NP and VNP, should seem pretty hazy. After letting the definitions sink in, the analogs might “make sense” intuitively, but I’ve given no formal support as to why these are the “best” analogs, or even why work on VP vs VNP gives insight into P vs NP. Some details for this are in Section 9 of Valiant’s paper “Completeness Classes in Algebra,” but I’ll quote his first sentence from the section, and wave a hand at the technical details:

“… the formal Boolean analog of the previously defined algebraic notion of p-definability [functions in VNP] is closely related to the concept of nondeterminism as traditionally applied to discrete computations.”

So again, it’s not known that VP ≠ VNP implies P ≠ NP, but the Boolean analogs of VP and VNP are closely related to P and NP, respectively.

## P.S.

After posting this, a friend referred me to a survey by Scott Aaronson on the state of P vs NP. There’s a newer (as of 2015) approach called Ironic Complexity Theory, which has been successful in proving circuit lower bounds. I plan on learning about this soon, but check out the survey for more information.


## 4 thoughts on “The Algebraic Analog of P vs NP”

1. Eric says:

Nice intro!

> VPN is the class of families of polynomials for which there exists a family

I assume you mean VNP 😛


2. I think the first determinant is wrong: x1*x7*x8, x7 and x1 are on the same column.
