Saturday, May 28, 2022

Bias vs Low Rank of Polynomials with…Algebraic Geometry – Abhishek Bhowmick

All right, thanks Ryan. So today we'll look at three concepts: polynomials, coding theory, and the Nullstellensatz, and this is joint work with Shachar Lovett. The organization of the talk is as follows: pseudorandomness of polynomials, algebraic geometry, coding theory, and finally a summary of the results.

So what is pseudorandomness of polynomials? Consider a uniform random variable x over F_p. We know that E[e(x)] = 0, since x is uniformly random, where e(x) denotes e^{2 pi i x / p}. You then want something similar to hold for pseudorandom degree-d polynomials: you want E[e(f(x))] to be small. This is going to be the setting throughout, so I am going to keep this part on the board: we always look at n-variate functions f from F_p^n to F_p, we look at degree-d polynomials, and, important in this talk, d is fixed.

So what does it mean for a degree-d polynomial to be pseudorandom? Here the field is F_p, and it means that every field value is taken with roughly the same probability. For that you need something like |E[e(f(x))]| small to hold: the output looks very random. The degree is fixed throughout the talk. But then, what happens if f is not random, that is, what happens if this expectation is actually large?

It means f is not random, so it has to be structured. So what can you say quantitatively about the structure? (The values e(0), e(1), e(-1), and so on live on the complex unit circle; again, e is shorthand for e(x) = e^{2 pi i x / p}.) And what is a? It is a scalar phase shift: for all a in F_p^*, you want the bias |E[e(a f(x))]| to be small. Only then does f(x) behave like a completely random function; you do not want it to be large for any a. The reason is that the probability Pr[f(x) = c] can be written, by Fourier inversion, as an average over a of the terms E[e(a (f(x) - c))]. That is how Pr[f(x) = c] breaks down: if I ensure all the terms with a nonzero are small, then only the trivial term a = 0 survives, so the probability is 1/p plus a small error. That is why this is an equivalent way of stating pseudorandomness.
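To make this concrete, here is a minimal brute-force sketch (my own illustration, not from the talk; the polynomial f and the parameters p, n are arbitrary choices) that computes the bias at every phase shift a and compares the output distribution of f to uniform:

```python
import itertools, cmath

p, n = 5, 2  # small hypothetical parameters for the demo

def e(a):
    """The additive character e(a) = exp(2*pi*i*a/p)."""
    return cmath.exp(2j * cmath.pi * a / p)

def f(x):
    """An example biased degree-2 polynomial: (x1 - 1)(x2 - 1)."""
    return ((x[0] - 1) * (x[1] - 1)) % p

points = list(itertools.product(range(p), repeat=n))

# Bias at each nonzero phase shift a: |E_x e(a * f(x))|.
for a in range(1, p):
    bias = abs(sum(e(a * f(x)) for x in points)) / len(points)
    print(f"bias at a={a}: {bias:.4f}")

# Output distribution: Pr[f(x) = c] versus the uniform value 1/p.
for c in range(p):
    pr = sum(1 for x in points if f(x) == c) / len(points)
    print(f"Pr[f = {c}] = {pr:.4f}  (uniform: {1/p:.4f})")
```

For this particular f every bias comes out to exactly 1/p, and correspondingly Pr[f = 0] is visibly larger than 1/p, so the polynomial is clearly not pseudorandom.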

So what happens if the bias is large? Can you conclude some structure about f? This is a central problem in algebraic geometry and higher-order Fourier analysis. So again, f is a degree-d polynomial with d fixed, and let us say the bias is really large, say more than 1/sqrt(p). The reason I call this really large is that 1/sqrt(p) is the variance bound: there are about p random-looking terms, and you would expect that kind of standard deviation. In such a case you can say that f is actually a function of just one lower-degree polynomial, and in fact we know what it looks like; this is the Weil and Deligne bound. They say that if f is not of this form, then the expectation is very small, which means that if the expectation is large, f is a function of just one lower-degree polynomial h(x). (Why the exponent one half? Because that is the barrier these bounds give you.) OK, I cheated here: I should have written something like d / sqrt(p) rather than 1/sqrt(p), but just to put this in perspective I dropped the d, because I am going to keep d fixed throughout while p varies.
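For reference, here is the shape of the statement as I understand the slide (a hedged paraphrase of the Weil/Deligne-type dichotomy, not a verbatim quote, with $c_d$ a constant depending only on the degree):

$$\text{if } f \text{ is not a function of a single polynomial of degree} < d, \text{ then } \left|\mathbb{E}_{x \in \mathbb{F}_p^n}\, e\big(f(x)\big)\right| \le \frac{c_d}{\sqrt{p}}.$$

So bias noticeably above roughly $d/\sqrt{p}$ forces $f$ to be a function of one lower-degree polynomial.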

So my next question would be: what if I replace the one half by something larger, say ten? Can you then say something like: f(x) is not a function of one h(x), but maybe of c polynomials h_1(x), ..., h_c(x), each of degree lower than d? Lower than d, because you cannot allow degree d (then you could just write f(x) = f(x)), and you cannot insist on degree one, because trivially f is a function of n linear forms, and n is really very large. So you are fine even with just "less than d", and that seems to be enough for all the machinery. Something like this was proved by Green-Tao and Kaufman-Lovett. They showed that if f is not a function of c lower-degree polynomials, then the bias is small; I am writing the contrapositive on the board and the positive statement here, whichever view you like more. But the c here has very bad dependence on d, k, and p, where p^{-k} is the bias, d is the degree, and p is the field size. The nice thing is that c is independent of n. But the dependence is Ackermann-type, so even logarithmically growing field sizes are not enough: the bound breaks down and becomes trivial, because trivially f is a function of n linear forms (it is an n-variate function), so c has to be much, much less than n to make any nontrivial statement. Since the dependence on p is Ackermann-type, the result is really weak, and you want something more efficient. This kind of theorem is very, very important in higher-order Fourier analysis.

OK, so next we are trying to make it more efficient in p, and surprisingly we are able to remove any dependence on p. This is the main theorem of our work, and I would like you to compare it to the setting k = 1/2, c = 1: recall that when k was 1/2 we had the 1/sqrt(p) barrier, and there we saw that f(x) was exactly a function of one lower-degree polynomial. Here it says: if k is larger, this is what you get. No dependence on the field size, no dependence on n; it has to depend on d and k, but that constant is not that great.
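In symbols, my paraphrase of the main theorem as stated on the board:

$$\text{if } \deg f = d \text{ and } \left|\mathbb{E}_{x \in \mathbb{F}_p^n}\, e\big(f(x)\big)\right| \ge p^{-k}, \text{ then } f(x) = \Gamma\big(g_1(x), \dots, g_c(x)\big)$$

for some function $\Gamma : \mathbb{F}_p^c \to \mathbb{F}_p$ and polynomials $g_1, \dots, g_c$ of degree at most $d - 1$, where $c = c(d, k)$ is independent of both $n$ and $p$.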

That is why I had to hide it: I had to write c(d, k), and it is Ackermann-type.

OK, so, proof outline. I am going to show how you can almost get to a decomposition, but not quite, and this part is very straightforward: you basically play with the derivative structure. (If any part is not clear I am going to use the board.) You start with a low-degree polynomial that is biased, say by 1/p^k. You define the derivative in direction y as D_y f(x) = f(x + y) - f(x). For a fixed x, I am going to look at the expected value of e(D_y f(x)) over a random direction y. Expanding D_y f(x) = f(x + y) - f(x): since x is fixed, I can pull e(-f(x)) out of the expectation, and what I am left with is just the expected value of e(f), because x + y is uniform when y is.

(Question: what if f is divisible by x_1 - 1? Then it is already biased toward zero, with an extra probability 1/p of vanishing just because of the x_1 factor.) Yes. Say f = (x_1 - 1)(x_2 - 1) g, where g could be of degree d - 2. Then f is biased, and it is a function of just three lower-degree polynomials: x_1 - 1, x_2 - 1, and g. So it has very low rank, consistent with the theorem.

OK, so for fixed x we have E_y[e(D_y f(x))] = mu * e(-f(x)), where mu = E[e(f)] is the bias of the function; keep in mind that mu is large. If I just rearrange, I get e(-f(x)) = mu^{-1} E_y[e(D_y f(x))]. So it is kind of a recipe for computing f(x) using lower-degree polynomials: it gives you an idea of how you are going to cook up the value of f(x) by looking at these derivative polynomials, which have degree at most d - 1. But this is too much: there are too many derivatives.
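Here is a small sketch (my own check, reusing the setup from the earlier snippet) that verifies the exact identity E_y[e(D_y f(x))] = mu * e(-f(x)) by averaging over all directions y:

```python
import itertools, cmath

p, n = 5, 2  # small hypothetical parameters

def e(a):
    return cmath.exp(2j * cmath.pi * a / p)

def f(x):
    return ((x[0] - 1) * (x[1] - 1)) % p  # example biased degree-2 polynomial

points = list(itertools.product(range(p), repeat=n))
mu = sum(e(f(y)) for y in points) / len(points)  # the bias mu = E[e(f)]

x = (3, 4)  # an arbitrary fixed point
# E_y e(D_y f(x)) with D_y f(x) = f(x + y) - f(x), averaged over all y.
lhs = sum(e(f(tuple((xi + yi) % p for xi, yi in zip(x, y))) - f(x))
          for y in points) / len(points)
rhs = mu * e(-f(x))
print(abs(lhs - rhs))  # ~0 up to floating-point error
```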

So the first step is to approximate this, and you just do some kind of Chebyshev inequality: we approximate the expectation by N random derivatives, taking the average. So my new recipe is that e(-f(x)) is approximately mu^{-1} * (1/N) sum_i e(D_{y_i} f(x)). And mu is large, so we can allow a somewhat larger error in our estimate, like inverse polynomial in p, and there is also the resolution between two consecutive values e(a) and e(b). (Question: what does that mean?) It means that e(a) and e(b), as complex numbers, are at least about 1/p apart. You need that kind of resolution to figure out what you are closest to, because the way I am going to guess f(x) is: it is the a that minimizes the distance between the computed expression and e(-a). I am going to compute the average and see which e(-a) I am closest to, and that is going to be my f(x). So this gives you a way of computing f(x), for a fixed x, using N lower-degree polynomials. But there are problems. One is that N has to be poly(p) to make the Chebyshev argument work, which is too large; you need something independent of p.
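Continuing the sketch above (again my own illustration; the parameters are arbitrary, and as the talk notes, N has to be poly(p) for the concentration to kick in): estimate e(-f(x)) from N random derivatives, then round to the nearest e(-a):

```python
import itertools, cmath, random

p, n, N = 5, 2, 20000  # hypothetical parameters; N must be poly(p)

def e(a):
    return cmath.exp(2j * cmath.pi * a / p)

def f(x):
    return ((x[0] - 1) * (x[1] - 1)) % p

points = list(itertools.product(range(p), repeat=n))
mu = sum(e(f(y)) for y in points) / len(points)

def guess_f(x):
    """Estimate e(-f(x)) = mu^{-1} E_y e(D_y f(x)) from N samples, then round."""
    avg = sum(e(f(tuple((xi + yi) % p for xi, yi in zip(x, y))) - f(x))
              for y in (random.choice(points) for _ in range(N))) / N
    est = avg / mu
    # Output the a in F_p whose character value e(-a) is nearest to the estimate.
    return min(range(p), key=lambda a: abs(est - e(-a)))

x = (2, 0)
print(guess_f(x), f(x))  # agree with high probability over the samples
```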

To fix it, we are going to choose a random fixed-dimension subspace and choose all the y_i's from that subspace. So if the number of y_i's you need is, say, p^s, I am going to pick a random s-dimensional subspace A and basically take everything in A. (Question: then what did we gain? We are still taking poly(p) points.) Exactly right, it is still p^s points, but we gain in structure, and we are going to use that later. The way that helps us: you can show that only constantly many of the derivatives D_{y_i} f determine all the rest. So technically you are taking all of them, but you can show there is a special interpolating set S, constant in size, such that if I know the values of D_y f for all y in S, everything else gets fixed. So there are a lot of redundant D_{y_i} f's.

(Question: I was actually asking before, even if I allow dependence on p, this only gives something that holds for most x?) Yes, that is another problem. Even with all this, what we have achieved is: for a fixed x, the statement holds with high probability over the y_i's, and therefore, for some fixed setting of the y_i's, it holds with high probability over x. So we have almost solved the decomposition, but only with high probability over x; we still do not have exact computation.

So this is the first step of the proof. The next step is basically to get this to hold with probability exactly 1. I am not going to present that proof here; it requires some higher-order Fourier analysis, which I will introduce later for another purpose, but it follows the proof structure of Green-Tao. From there you get it down to holding for every x, and ultimately you have f(x) equal to this expression for all x. These derivatives are the starting polynomials you work with; you keep making refinements as you go, turning this near-miss result into an always-correct result.

(Question: and what is capital N in the end?)

It is going to be Ackermann-type in d and k, where k is the bias exponent, as in bias 1/p^k. And a tower bound would be a very weak lower bound here; the truth is much, much worse than a tower. Even for small d it blows up quickly: roughly, it is already exponential for d = 3, doubly exponential for d = 4, and I think when you hit d = 5 you get a tower in k, a tower of length k; it is already very bad there. (It is probably not the Ackermann function exactly; I think it is something in that hierarchy.) Right, for lower d we are still good, but we are in that range.

OK, so this is going to be the central theorem, and now we are going to look at some applications in algebraic geometry and coding theory.

So I am going to keep the central theorem here while I move to the applications: if f is biased by more than 1/p^k, then f is a function of constantly many lower-degree polynomials, where the constant c depends on d and k.

All right, so let us look at the Nullstellensatz, the Hilbert Nullstellensatz. You have k polynomials f_1, ..., f_k of degree d, and then you have another polynomial h(x), and the promise is that h(x) vanishes on the common zeros of the f_i's. Then there exist an integer r and a degree bound D such that you have multiplier polynomials g_1, ..., g_k of degree at most D with sum_i f_i g_i = h^r. (As Noga pointed out, in the usual setting you want a stronger assumption: that h(x) vanishes on the common zeros of the f_i's in the closure of F_p.)
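So the statement being quantified is the effective Nullstellensatz, in the shape described on the slide:

$$\exists\, r \in \mathbb{N},\ D,\ \text{and } g_1, \dots, g_k \text{ with } \deg g_i \le D \quad \text{such that} \quad \sum_{i=1}^{k} f_i\, g_i = h^{r}.$$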

We will work without the closure assumption, and we will see on the next page what the results are; as I state each result, I will say which ones require the stronger closure hypothesis and which ones require just the finite-field analog. The idea is to get bounds on r and D, and I am going to assume the number of polynomials k is fixed.

The first result is by Hermann, and she proved that D is doubly exponential in n and r is some constant depending on d; there you really are working in the closure. Then Brownawell and Kollár, in breakthrough work, were able to get it down to singly exponential: they proved that D is just about d^k, with r again some function of d, again in the closure world. Green and Tao worked with the finite-field analog: we do not need h to vanish on the closure; as long as h vanishes on the common zeros of the f_i's in the field itself, you can actually get something very strange. You get r = 1, but you lose on the degree: D becomes something Ackermann-type in d and p. Still, the moment you get r = 1 you immediately get an ideal membership algorithm: given f_1 through f_k, tell me if h is in the ideal of the f_i's. You get an algorithm that runs in time about n^D. (The first two results would give you radical membership algorithms instead.) But over the closure, r = 1 is sometimes just not true.

Right, the counterexample you are thinking of is something like: take f_1 = x^2 and h = x. The promise holds, since whenever x^2 vanishes, x vanishes; but can you find a multiplier g_1 such that x = x^2 g_1(x) as polynomials? No. The way they get around this is that p enters: they use the fact that x^p = x over F_p. So I am going to choose my multiplier to be x^{p-2}, and then x^2 * x^{p-2} = x^p = x as functions.
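A quick sanity check of that trick (my own snippet): over F_p the identity x = x^2 * x^{p-2} holds as functions, by Fermat's little theorem for x nonzero and trivially at x = 0:

```python
p = 7  # any prime works here
for x in range(p):
    lhs = x % p
    rhs = (x ** 2 * x ** (p - 2)) % p  # multiplier g_1(x) = x^(p-2)
    assert lhs == rhs, (x, lhs, rhs)
print(f"x == x^2 * x^(p-2) as functions on F_{p}")
```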

(Question: what is this function c, what are the dependencies?) So some of these quantities are very efficient, like d^k with k fixed, so poly(d); but this constant is worse than a tower of length in d and k, so you can have 2^{2^{...}}, and even that is only a lower bound, so it is worse than this. It is an asymptotic result in the true sense. And here p also enters the picture, so the bound depends on p as well. The main point is the r = 1: they are able to get r = 1, which immediately solves the ideal membership problem. And in our result we are able to get the p out in a linear way, D = p * c(d, k), and the linear dependence on p is tight because of the x^2 example: you have to have p. But this is the shape you expect, D = p * c(d, k), and you still have r = 1, so you can still solve ideal membership in time about n^D. (Question: this is as functions, not as polynomials?) Right, as evaluations, that is, modulo the relations x_i^p - x_i for all i.

There are other applications along these lines, like counting points on rational varieties and bounds on the numbers of points of varieties and so on, but I am not going to go over them right now. I am going to jump to another application: coding theory.

Just an introductory slide on codes. What is a code? You have an ambient space, and you take a subset of it; that is a code.

The way it is different from any other subset is that there are two parameters you care about: the rate and the minimum distance. The rate tells you how many codewords you have, that is, the size of your subset, and the minimum distance tells you how far apart they are. You want the codewords to be as far as possible from each other, and at the same time you want a lot of them, so the two requirements pull against each other. And what is list decodability? It asks for the number of codewords in a ball of radius r, and you want to show that it is bounded: what is the maximum radius you can get to and still ensure that the number of codewords in the ball is bounded? That is the list decoding problem. So, getting into more specifics:

The Reed-Muller code. Here the ambient space is the space of all n-variate functions from F_p^n to F_p, and your codewords are the low-degree polynomials. You can think of each codeword as the evaluation table of the function, a vector of length p^n, and the distance between two functions is the normalized Hamming distance: the distance between P_1 and P_2 is simply Pr_x[P_1(x) != P_2(x)]. You can show by the Schwartz-Zippel lemma that the minimum distance is 1 - d/p (I am assuming d < p now, just for the proof outline and presentation). So any two distinct degree-d polynomials are at least 1 - d/p apart, and this is tight. This number, the minimum distance, is going to be important; we will see that later.
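Here is a tiny brute-force check of that minimum distance (my own snippet, with deliberately tiny hypothetical parameters): enumerate all n-variate polynomials of degree at most d over F_p and compute the minimum normalized distance between distinct pairs.

```python
import itertools
from math import prod

p, n, d = 3, 2, 1  # tiny hypothetical parameters, with d < p

points = list(itertools.product(range(p), repeat=n))
# Exponent vectors of monomials of total degree <= d (for d=1: 1, x1, x2).
monos = [m for m in itertools.product(range(d + 1), repeat=n) if sum(m) <= d]

def evaluate(coeffs, x):
    return sum(c * prod(xi ** mi for xi, mi in zip(x, m))
               for c, m in zip(coeffs, monos)) % p

tables = [tuple(evaluate(c, x) for x in points)
          for c in itertools.product(range(p), repeat=len(monos))]

min_dist = min(sum(a != b for a, b in zip(t1, t2)) / len(points)
               for t1, t2 in itertools.combinations(tables, 2))
print(min_dist, 1 - d / p)  # both print 0.666..., i.e. 1 - d/p
```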

Now that we saw what the Reed-Muller code is, what is list decoding of Reed-Muller codes? There is something called the list decoding radius, and what it means is: it is the largest r_0 such that if I look at any ball of slightly smaller radius, then the number of codewords in it is independent of n. So what is the maximum r_0 such that if I shrink the ball a little bit, the number of codewords loses any dependence on n?

Let us see what the list decoding radius would be. Look at a ball of radius 1 - d/p around the zero function (I mean distance less than or equal to 1 - d/p) and see how many codewords we have inside. Here you will have exponentially many codewords, and it is very easy to see what the polynomials look like: take any nonzero linear form l of your choice and look at (l(x) - 1)(l(x) - 2) ... (l(x) - d). What is the distance of this polynomial from the zero function? It is exactly 1 - d/p, and there are about p^n choices of the linear form, so there are about p^n codewords sitting right on the edge.
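The distance computation for those edge codewords is a one-liner; for a nonzero linear form $l$ the value $l(x)$ is uniform over $\mathbb{F}_p$, so

$$\Pr_x\big[(l(x)-1)(l(x)-2)\cdots(l(x)-d) \ne 0\big] = \Pr_x\big[l(x) \notin \{1, \dots, d\}\big] = 1 - \frac{d}{p}.$$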

But that does not rule out the list decoding radius being exactly this number, because recall that the list decoding radius is the largest r_0 such that if I shrink the ball, the number of codewords becomes independent of n. So what if I ask for the number of codewords at radius 1 - d/p - epsilon: can you show that it is independent of n? In fact, people have shown exactly that for a large number of settings of d and p, which means the list decoding radius equals the minimum distance: at the edge you have exponentially-in-n many codewords, but the moment you come down a little bit, the count is independent of n. For the linear case this was proved by Goldreich-Levin and Goldreich-Rubinfeld-Sudan, and for the binary setting Gopalan, Klivans, and Zuckerman proved it in 2008; later Gopalan extended it to the quadratic case for all fields. The GKZ paper depends heavily on the field being binary, the techniques do, but Gopalan used some Fourier analysis and was able to prove the quadratic setting for all fields. And in an earlier work with Lovett we were able to resolve this for all fixed d and p: as long as these are fixed, the list decoding radius equals the minimum distance.

Over large fields there are a lot of results, but they all seem to point to the conclusion that the list decoding radius is at least 1 - sqrt(d/p); this is called the Johnson bound. All these results say: if the ball has radius 1 - sqrt(d/p), then I can output all the codewords in it efficiently, and so on. People did not know what happens beyond this, and we know that 1 - d/p is the maximum you can get to, because of all those codewords lying on the edge. So there is this gap, from 1 - sqrt(d/p) to 1 - d/p. In this result we actually show that you can go all the way to 1 - d/p, again for fixed degrees. So we show that even for large fields you can list decode up to 1 - d/p, which is the maximum you can do, and it is algorithmic as well. I am going to give a proof outline for this shortly.

OK, so we saw what happens at 1 - d/p, the minimum distance. But what happens if I increase the size of the ball? What if I want to know how many codewords are in a larger-radius ball? So fix some e < d and look at a ball of radius 1 - e/p - epsilon. How many codewords do you have? You can actually show that you will have at least this many codewords (the count on the slide) sitting in a ball of this radius, and a proof of the lower bound is to look at polynomials of the following form: take your first variable x_1 and construct (x_1 - 1)(x_1 - 2) ... (x_1 - e), keep the other variables free, and multiply by an arbitrary polynomial Q of degree d - e in the remaining variables. If you stare at it long enough, you will see that the distance of this polynomial from the zero function is slightly less than 1 - e/p. So you have all these polynomials sitting in this radius, and how many of them are there? As many as there are degree-(d - e) polynomials in the remaining variables. So that is the lower bound, and you want to prove an upper bound of the same shape.
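Making the lower-bound computation explicit (my own reconstruction of the slide's numbers): for $f(x) = (x_1 - 1)\cdots(x_1 - e)\, Q(x_2, \dots, x_n)$ with $\deg Q \le d - e$, independence of $x_1$ from the remaining variables gives

$$\Pr_x\big[f(x) \ne 0\big] = \Big(1 - \frac{e}{p}\Big) \Pr\big[Q \ne 0\big] \le 1 - \frac{e}{p},$$

and the number of choices of $Q$ is the number of degree-$(d-e)$ polynomials in $n - 1$ variables,

$$p^{\binom{n-1+d-e}{d-e}} = p^{\Theta(n^{d-e})}.$$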

In fact, an upper bound of this shape was known for the binary case, from a work of Kaufman, Lovett, and Porat, and for general fixed fields from an earlier work of ours; in this result we are able to extend it to general fields. In the earlier work we actually had 2 to an Ackermann-type function of p, d, and epsilon; in this work we are able to pull the p down to its right shape, so now it is p to the (whatever remains). And as we saw, the shape of the n-dependence is right, and the shape of the p-dependence is right; an open problem would be to fix the shape of the dependence on epsilon and d. (Question: what about over F_2?) Over F_2 this line of work already gets a bound exponential in d, and Amir Shpilka and coauthors made it exponential in e, which is less than d. They really wanted that for their result: they wanted something that depends only on e and not on d, because what if you are interested in e equal to one or two and do not want any d-dependence? So those are all extremely good bounds compared to what we have right now, at least for the general field case. For F_2 the bounds are very good, polynomial and exponential things, but for general p, even for fixed degree, we have this problem of bad constants.

And just as an aside, a direct corollary is that we now have a better idea of the weight distribution of Reed-Muller codes: basically, the number of codewords at weight 1 - e/p - epsilon is essentially this count. So you see nice jumps in the number of polynomials. You start with e = 0, which allows everything, all the degree-d polynomials, and as you keep increasing e the number of polynomials keeps dropping, until you lose the dependence on n, which is when you reach 1 - d/p: when you substitute e = d, you do not get any more n. So it tells you where the number of polynomials jumps. Now let us look at a proof outline for this, and we are going to set d < p for now, just for the outline.

So this is the goal: you want to show that the number of codewords in a ball of radius 1 - e/p - epsilon is this number, roughly p to the (n to the d - e). The proof, at a high level, is a two-step proof. Given a center g, construct a ball of radius 1 - e/p - epsilon around it. The first step is a weak regularity lemma: you basically replace the center g by a low-complexity center g' which is more structured. So you get a set of low-complexity proxies g' for g, where low complexity means it is made of few low-degree polynomials; essentially it is a composition of constantly many low-degree polynomials. Instead of working with the center g, you work with the center g' that looks like this structured thing. And the second step (I am going to go over these steps again in more detail, this is just the high level) is to say that any codeword f that is close to g will be close to g' as well, because g' acts like a proxy for g. And anything that is close to g' is much easier to analyze: you can show that any such f is really structured, so there cannot be too many such f's that are close to g in the first place.

(Question: can you not ensure that just by a counting argument?) No, there are way too many. And note that g' is not an approximation to g: g' just looks like g to the class of low-degree polynomials. Let us see if the next slides make it clear; otherwise I will use the board.

OK, step one, expanded. You have these codewords, which are low-degree polynomials, and you have a center g, which is an arbitrary n-variate function. The proxy is defined as follows: there exist c codewords P_1, ..., P_c which will act as a proxy for g, and what that means is: whenever a polynomial P is close to g (close meaning within whatever radius we are interested in), that polynomial is also close to a g' of the form Gamma(P_1, ..., P_c). But note, this is a bit misleading: g and g' need not be close to each other. All this is saying is that whenever P is close to g, there will be a g' with P close to g'. And the advantage is that P_1 through P_c are universal: they do not change with the codeword P; the only thing that changes with P is the composition function Gamma. But this is fine for us, because Gamma is a c-variate function, so we are not unhappy about that. OK, so that is the first, weak regularity, lemma: it basically reduces the list decoding problem around a general center to a list decoding problem around a more structured center. So I think I will take this down now, and the motivation for reducing to structured centers is as follows.

What if the center were the zero function? Let us say e = d, so we are trying to solve at radius 1 - d/p, which means: how many polynomials f do we have which agree with g on more than a d/p fraction of points? If g is the zero polynomial, then f is the zero function, just by Schwartz-Zippel (a nonzero degree-d polynomial vanishes on at most a d/p fraction), which means in particular that f does not depend on x_1 through x_n. Now if I change the center to g(x_1, ..., x_c), just a c-variate function, then under the same hypothesis you can show that f does not depend on x_{c+1} through x_n, again by a simple Schwartz-Zippel kind of argument: f is actually independent of every variable outside g's. Which means that the number of f's close to g is small. (Question: are you not assuming g is low degree?) No, and that is exactly why it is powerful. If g were low degree, then you would have something stronger: f would be the constant function, because f - g is also low degree. I am saying g is an arbitrary function, but g is only c-variate. The only way f can ever overshoot the legal d/p agreement is if f does not have any dependence on the remaining variables, so f is also a c-variate function, which automatically puts the number of such f's at p to the (c + d choose d), the number of c-variate degree-d polynomials. So this is one way of bounding the list size when the center is structured.
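That count is worth writing out, since it is where the n-independence comes from: a c-variate polynomial of degree at most d has $\binom{c+d}{d}$ coefficients, so

$$\#\big\{ f \in \mathbb{F}_p[x_1, \dots, x_c] : \deg f \le d \big\} = p^{\binom{c+d}{d}},$$

which is independent of n.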

Now, we almost have that: we are counting the number of codewords around the structured center. But it is not quite the count we want: what we have is the number of compositions Gamma on c plus k variables, and that is a lot, p to the p to the (k + c). The way you get around that is an argument saying that the inner polynomials behave like individual variables: if the left-hand side is a degree-d polynomial, you cannot have an arbitrary-degree composition; you can prove that even the composition Gamma itself is essentially a degree-d polynomial. So you do not have so many of them; you have the right number, which is very, very small compared to the doubly exponential count. You can show that even Gamma has to be a low-degree composition, and that is how you get away from the doubly exponential bound.

How much time do I have? Shall I summarize? Then I will not go through all the details.

I am going to tell you the proof outline one more time, and I will skip the higher-order Fourier analysis. So let me tell you about two problems with this approach. The first problem is that we can no longer work with all degree-d polynomials: to get this kind of conclusion you have to pass to a subcode of the Reed-Muller code, and the subcode is: we change our definition of the code and keep only those degree-d polynomials which are also of low rank, meaning only those degree-d polynomials that can be written as a composition of at most c lower-degree polynomials, for some constant c. We are going to work with that subcode. And the second change is that we can no longer get one proxy for every g: there will be a bunch of proxies, g'_1, g'_2, ..., g'_k, and all you can say is that P is close to one of them. Then you basically bound the list sizes around each of the proxies and add them up. So these are the two main points of departure. To make it more precise, I am going to skip a lot of slides.

OK, so the complete proof outline is as follows. First, you reduce the list decoding problem for the general Reed-Muller code to a special subcode of the Reed-Muller code, namely the low-rank degree-d polynomials: we fix a universal c and keep only those low-degree polynomials that can be written as a composition of at most c lower-degree polynomials; that is our new code C'. We also show that this does not mess up our analysis, because most of the bad codewords lie in C': the fact that we threw away a lot of codewords does not affect us, because there will not be too many of them around the ball. So we can work with the structured code, which is actually responsible for essentially all the codewords; if you look, all the counterexamples were very, very structured, like (x_1 - 1)(x_1 - 2) and so on. We keep those, and we will say the others do not matter. Next, given any n-variate function g, there is a list of structured proxies g'_1, ..., g'_k such that whenever an f in C' is close to g, it is also close to one of the g'_i's. And finally, we show that the list size around these structured g'_i's is easy to bound, by generalizing that Schwartz-Zippel kind of argument.

Then you combine everything together, and that is an outline of the proof.

So the take-home message would be: the notions of structure and pseudorandomness have a lot of applications. Just to recap, whenever a function of n variables is biased, it is not really an n-variate function: it is almost a c-variate function for a constant c. But I am lying a bit: it is not c variables, it is c lower-degree polynomials. So you can do this kind of dimension reduction as long as you are happy to work with lower-degree polynomials instead of linear forms; linear forms would be too good to be true, but if you relax that, then the dimension reduction takes you from n down to a constant. And these kinds of dichotomy theorems have been used a lot; in this work we saw how to apply them in algebraic geometry and coding theory, and there are many other areas where they have already been applied. So, open problems.

The first one is basically to improve the bounds on the constant: right now it is Ackermann-type, and something polynomial or even exponential would be great. There are no interesting lower bounds, not in this algebraic setting; there are in the graph-theoretic setting, like triangle removal, where people have already shown tower bounds, but here we do not yet have one. You would have to avoid this regularization technique, though; otherwise you will keep incurring an Ackermann-type loss. And the next one, a computer science open problem, would be to extend this result to Reed-Solomon codes. Reed-Solomon codes are essentially the univariate case: degree d, but over one variable instead of n variables. Can you list decode up to that 1 - d/p radius? Right now we know how to decode up to the Johnson radius, from Guruswami-Sudan's work, but can you beat that, can you go all the way to 1 - d/p? Right now we showed this only for Reed-Muller codes, and that would be a very nice open problem.

So yeah, that's it. Any questions?
