## How to model and solve a quadratic assignment model in Gurobi

### Polynomial curve fitting

I want to implement polynomial curve fitting using the least-squares technique, but with various error functions, i.e. not just least squares. Is there some way to do that in MATLAB? (I want to compare the results for different error functions. I also want to use regularisation, for which I need to change the error function.)
Can you share any resources (MATLAB/C++) that could help with implementing curve fitting without built-in functions? I could only find ones using Gaussian elimination - is that the same as least-squares fitting?
Gaussian elimination is not the same as least-squares fitting. The sense in which it is not the same as least-squares fitting resembles the sense in which gasoline is not the same as driving.
Gaussian elimination is a technique for solving a linear system. Least-squares fitting builds a linear system (the normal equations) and then solves it, so it can use Gaussian elimination for that step.
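To make that connection concrete, here is a minimal sketch in Python (with NumPy standing in for MATLAB's matrix operations) that fits a polynomial by building the normal equations and handing the resulting linear system to a solver, which is exactly where Gaussian elimination enters. The data is an illustrative, noise-free example:

```python
import numpy as np

# Least-squares polynomial fit done "by hand": build the normal
# equations (A^T A) c = A^T y and solve that linear system; the
# solve step is where Gaussian elimination is used.
x = np.linspace(0, 1, 20)
y = 3 * x**2 - 2 * x + 1           # noise-free data for illustration
degree = 2

A = np.vander(x, degree + 1)       # Vandermonde design matrix
c = np.linalg.solve(A.T @ A, A.T @ y)   # LU/Gaussian elimination under the hood
print(np.round(c, 6))              # recovers the coefficients [3, -2, 1]
```

For higher degrees the normal equations become ill-conditioned, which is why library routines prefer QR or SVD (the Moore-Penrose route mentioned below).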
In general, as far as I know, least-squares fitting in the generalized Moore-Penrose sense (see sect. 13.6 here; caution, heavy reading) is the canonical linear way to fit parameters. If you wish to use an unrelated error function, then you will have to either (a) depart from matrix techniques or (b) use less efficient iterative matrix techniques which do not approach the power of Moore-Penrose.
I realize that this is probably not the answer you wanted, but I believe that it is the answer. If you find out differently, let us know.
Polynomial curve fitting is the first step towards learning "machine learning". My advice is to try least squares first and then understand the probabilistic treatment of curve fitting; you can find this in Bishop's book. The summary is: you can assume that the target value t for an input value x comes from a Gaussian distribution, so the error can be minimized by maximizing the likelihood of the target value. This looks easy at the beginning, but the intuitive meaning has many insights. I would recommend you try this using MATLAB or R.
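As a sketch of the regularisation point raised in the question: with a squared-error-plus-L2 penalty the problem still has a closed form, (A^T A + lam I) c = A^T y, while most other error functions lose the closed form and need iterative methods. The data, the degree, and the penalty weight `lam` below are illustrative assumptions:

```python
import numpy as np

# Ridge-regularised least squares: minimise ||A c - y||^2 + lam ||c||^2.
# The penalty changes only the normal equations, not the solution method.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)

A = np.vander(x, 10)               # degree-9 polynomial, prone to overfitting
lam = 1e-3                         # illustrative regularisation weight
c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
print(c.shape)                     # 10 fitted coefficients
```

Sweeping `lam` and comparing held-out error is the usual way to see the effect of the regulariser.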

### Is the Pearson correlation coefficient a suitable objective function for quadratic programming solvers?

Is the Pearson correlation coefficient -- with one vector, x, exogenous and another vector, y, as a choice variable -- a suitable quadratic objective function for quadratic programming solvers like Gurobi?
A quick Google search for "Gurobi objective function" shows that Gurobi has an API for setting an objective function that accepts a linear or quadratic expression. That is to be expected, because quadratic programming is, by definition, the optimization of a quadratic function, with the math behind the methods designed specifically for this class (e.g. working directly with the Q coefficient matrix and the c vector rather than with the raw function).
I didn't look into the details too much, but the Pearson product-moment correlation coefficient appears to be not a quadratic but a rational function of y. So if your specific case can't be simplified to a quadratic, the answer is no.
I cannot say anything about other solvers because each one is an independent product and has to be considered separately.
Since your function appears to be piecewise continuous and infinitely differentiable, you're probably interested in general-purpose gradient methods instead.

### Fixed Point and Proof theory

For any given logic program, proof theory of it uses SLD (Selective Linear Definite) resolution to find the satisfiablity of the query. For the same logic program, we can apply fixed point theorem to find the models.
My question is,
should we consider finding fixed point of logic programs as proof theory or model theory or is it neither?
My guess would be model theory since the fixpoint semantics of a logic program is its model. However, we know that |= coincides with |- for logic programs, so the semantics based on proving (=resolution) coincide with the semantics based on the fixed points (models).
The preceding discussion is valid only for pure logic programs, i.e., no negation, bultins, arithmetics...

### The essence of FRP: functional reactive programming as programming with (discrete) differential equations? [duplicate]

Specification for a Functional Reactive Programming language
I am trying to understand functional reactive programming for a long time (since I have participated in the Reactive Coursera course a year ago) but I still don't understand the essence of it.
Here I am going to describe my current understanding about functional programming vs. functional reactive programming and I would like know if I am on the right path towards understanding the essence of functional reactive programming or not, if not then why not ?
I want to know if it is a good analogy to think about functional reactive programming as programming with differential equations.
In other words, specifying how the system evolves in terms of equations (declaratively).
In functional programming computations are described using static, time-independent equations, in contrast in functional reactive programming everything becomes time dependent. So instead of describing a simple function, one describes a function that depends on time explicitly.
For example, in traditional functional programming one is programming using pure functions without side effects. Just like mathematical functions (maps from one set to another set).
For example f(x)=x^2.
However, in functional reactive programming, to my understanding, and I am not sure if I understand it correctly so please correct me if I am not, one describes computations in terms of time dependent discrete differential equations.
For example, if I want to describe the user interacting with a ball on the screen, which can move along one dimensions (x) then I write the following equations:
x(t)/dt=v(t)
v(t)/dt=a(t)
a(t)=F(t)/m
where F(t) is the force exerted by the user onto the ball.
If I understand correctly, the essence of functional reactive programming is to go from static functions to time dependent functions and express the computations/algorithms in terms of (discrete) differential equations.
Is this understanding of mine correct ? Is this really the essence of functional reactive programming, or is there more to it ?
See my answer here (containing the two essential founding properties of FRP) and follow the links you find.

### Y-Combinator definiton

I am trying to understand the fixed-point combinator. I think it is used by some languages to implement recursion. The main problem is that I couldn't get the next definition:
That is an implementation of the fixed-point combinator in lambda calculus (called a Y-combinator). It satisfies the equation
There isn't too much to "get" about the implementation other than it satisfies the above.
The wikipedia entry here shows how the Y-combinator satisfies the above equation