How to print out all constraints in Gurobi C++?

I am using Gurobi, but the model turns out to be infeasible, so I am trying to print out all the constraints to see if I made mistakes. I know a few functions for printing the name of each constraint, but I just couldn't find a way to print the constraints themselves (the mathematical expressions).
GRBConstr *c = 0;
c = model.getConstrs();
for (int i = 0; i < model.get(GRB_IntAttr_NumConstrs); ++i) {
    cout << c[i].get(GRB_StringAttr_ConstrName) << endl;
}

To debug a model, the best option is to write the model file in LP format. In your example, add the code:
model.update();
model.write("debug.lp");
Then browse the file debug.lp in your favorite text editor.
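Putting it together, a minimal sketch (the model-building section is a placeholder for your own code; GRBModel::write infers the format from the file extension, and computeIIS can additionally isolate the conflicting constraints of an infeasible model):
// sketch: assumes #include "gurobi_c++.h" and using namespace std
try {
    GRBEnv env;
    GRBModel model(env);

    // ... build your variables and constraints here ...

    model.update();           // make pending changes visible
    model.write("debug.lp");  // human-readable LP file to inspect

    model.optimize();
    if (model.get(GRB_IntAttr_Status) == GRB_INFEASIBLE) {
        model.computeIIS();       // find an irreducible infeasible subsystem
        model.write("debug.ilp"); // write just the conflicting constraints
    }
} catch (GRBException &e) {
    cout << e.getMessage() << endl;
}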

Related

Make NERD Commenter prepend before spaces [duplicate]

Is there a way to put the comment character at the very beginning of the line, before any spaces? E.g., instead of having comments inserted like this:
// if (b != 0)
    // if (a > b)
        // cout << a << endl;
it should be like this:
// if (b != 0)
//     if (a > b)
//         cout << a << endl;
The only things I see in the docs are NERDSpaceDelims and NERDRemoveExtraSpaces, which I've tried toggling, but they don't seem to help with this issue.
If you would like to use vanilla Vim, here we go:
Select the block using C-v, followed by any navigation key, in your case j. Then press I; this takes you into insert mode.
Now type whatever you want to insert, for your particular example //, and finalize it with Esc.
You should now have a commented block.

How to get the ASCII value for an integer greater than 127 (extended ASCII character) in Objective-C

I need to send the ASCII value for an integer greater than 127 to an external device. I'm using [NSString stringWithFormat:@"%c", myInteger]. However, this returns an empty string, probably because myInteger is greater than 127, e.g. 155.
How can I get the correct extended ASCII value?
You need to use %C, not %c. Then it will work with any 16-bit value.
I don't know if this is relevant, but I have been playing around with characters for a C++ text-based game I am making. Keep in mind that I am working in CMD.
I used this to get all the possible characters, to know what I have to work with:
for (int i = 0; i < 256; i++) {
    cout << i << " " << char(i) << endl;
}
You can use int(character) to do the opposite.
Like user1152964 said, this must be extended ASCII, because after 255 all the characters just repeat themselves.
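As a small illustration of both directions (a quick sketch; note that plain char may be signed on your platform, so codes above 127 are safer to pass through unsigned char):
#include <iostream>
using namespace std;

int main() {
    cout << int('A') << endl;  // 65: character -> code
    cout << char(65) << endl;  // A: code -> character
    unsigned char e = 155;     // unsigned char avoids a negative value here
    cout << int(e) << " " << char(e) << endl;
}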
In my opinion, your best choice would be to send the integer value of the character to the external device and then make sure the character is presented in the right format on the other end.
This might help you with Objective-C:
Objective-C int to char

How can one make Vim Tabularize automatically break for a given column width?

Using Tabularize in Vim is really sweet, but one thing I don't like is when one particularly long column messes all the others up. Generally, I like to keep my text width at 80 characters or less; otherwise, when you split vertically, readability suffers. Consider the following:
mlog->_ofile << "INS=" << string((char *) ins_asm) << delimiter <<
    " in Img: " << (img_name != 0 ? string((char *) img_name) : "INVALID") <<
    delimiter << " at IP=" << setbase(16) << insPointer << delimiter <<
    " Time: " << setbase(10) << time << delimiter;
You would generally align this with a simple visual selection and then :Tab /<<. But the long column starting with (img_name!=... messes everything up: it forces all the other columns to be really long. It would be cool if Tabularize had a variable you could set so that it would automatically find the optimal spacing and arrangement of tokens to make for the shortest legal statement, but that is arduous to do for each language; it's best to be able to quickly split by text width.
Just to note, :set tw=80 doesn't do the job either.
How can one both align this statement quickly with tabularize, and also have it insert newlines where appropriate?
The result's not pretty :) but try:
:Tab /<<
then select the range and type gq.
There's also a non-visual method:
:%s/\(.\{78}\)/\1\r/g
which inserts a line break after every 78 characters, but the result looks worse.

MiniZinc, Gecode remove solution separators

I have a MiniZinc model for which I want to find all solutions (I use Gecode) and then print statistics. This is easy:
mzn-gecode -as foo.mzn
but this model will generate thousands of solutions, and a separator is printed for each one:
----------
----------
----------
----------
==========
I need to remove these separators and only print the statistics. Is there a way?
==Update==
I was able to solve this by changing the Gecode source in
gecode/flatzinc/flatzinc.cpp
where I removed
out << "----------" << std::endl;
Maybe there is a better solution, but this worked great for me.
These separators are shown because you don't have any output statement for the variables.
E.g.
output [
  show(x) ++ "\n" ++ show(y)
];
--soln-sep <s>, --soln-separator <s>, --solution-separator <s>
Specify the string used to separate solutions.
The default is to use the FlatZinc solution separator,
"----------".
Adding --soln-sep <s> overrides the standard separator.
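For example, combined with the flags from the question (a hypothetical invocation; check mzn-gecode --help for the exact option spelling your version accepts):
mzn-gecode -as --soln-sep "" foo.mzn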

Unicode in PDF

My program generates relatively simple PDF documents on request, but I'm having trouble with unicode characters, like kanji or odd math symbols. To write a normal string in PDF, you place it in brackets:
(something)
There is also the option to escape a character with octal codes:
(\527)
but this only goes up to 512 characters. How do you encode or escape higher characters? I've seen references to byte streams and hex-encoded strings, but none of the references I've read seem to be willing to tell me how to actually do it.
Edit: Alternatively, point me to a good Java PDF library that will do the job for me. The one I'm currently using is a version of gnujpdf (which I've fixed several bugs in, since the original author appears to have gone AWOL), that allows you to program against an AWT Graphics interface, and ideally any replacement should do the same.
The alternatives seem to be either HTML -> PDF, or a programmatic model based on paragraphs and boxes that feels very much like HTML. iText is an example of the latter. This would mean rewriting my existing code, and I'm not convinced they'd give me the same flexibility in laying out.
Edit 2: I didn't realise before, but the iText library has a Graphics2D API and seems to handle unicode perfectly, so that's what I'll be using. Though it isn't an answer to the question as asked, it solves the problem for me.
Edit 3: iText is working nicely for me. I guess the lesson is, when faced with something that seems pointlessly difficult, look for somebody who knows more about it than you.
The simple answer is that there's no simple answer. If you take a look at the PDF specification, you'll see an entire chapter - and a long one at that - devoted to the mechanisms of text display. I implemented all of the PDF support for my company, and handling text was by far the most complex part of the exercise. The solution you discovered - use a third-party library to do the work for you - is really the best choice, unless you have very specific, special-purpose requirements for your PDF files.
In the PDF reference, chapter 3, this is what they say about Unicode:
Text strings are encoded in either PDFDocEncoding or Unicode character encoding. PDFDocEncoding is a superset of the ISO Latin 1 encoding and is documented in Appendix D. Unicode is described in the Unicode Standard by the Unicode Consortium (see the Bibliography). For text strings encoded in Unicode, the first two bytes must be 254 followed by 255. These two bytes represent the Unicode byte order marker, U+FEFF, indicating that the string is encoded in the UTF-16BE (big-endian) encoding scheme specified in the Unicode standard. (This mechanism precludes beginning a string using PDFDocEncoding with the two characters thorn ydieresis, which is unlikely to be a meaningful beginning of a word or phrase.)
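As a sketch of what that looks like in practice, here is one way to emit such a text string as a PDF hex string (my own illustration, assuming wchar_t holds UTF-16 code units; this applies to PDF text strings such as /Title or annotation contents, not to content-stream strings drawn with Tj):
#include <string>

std::string toPdfUtf16Hex(const std::wstring &s)
{
    static const char *hexDigits = "0123456789ABCDEF";
    std::string out = "<FEFF";  // the U+FEFF byte order marker comes first
    for (size_t k = 0; k < s.size(); ++k) {
        unsigned v = static_cast<unsigned>(s[k]) & 0xFFFF;
        out += hexDigits[(v >> 12) & 0xF];  // high byte first (big-endian)
        out += hexDigits[(v >> 8) & 0xF];
        out += hexDigits[(v >> 4) & 0xF];
        out += hexDigits[v & 0xF];
    }
    return out + ">";
}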
Algoman's answer is wrong on several points. You can make a PDF document with Unicode in it, and it's not rocket science, though it needs some work.
He is right that, to use more than 255 characters in one font, you have to create a composite font (CIDFont) PDF object. Then you just reference the actual TrueType font you want to use in the DescendantFonts entry of the composite font.
The trick is that after that you have to use the glyph indices of the font instead of character codes. To get this index map you have to parse the cmap section of the font: get the contents of the font with the GetFontData function and take the TTF specification in hand.
And that's it! I've just done it, and now I have a Unicode PDF!
Sample code for parsing the cmap section is here: https://support.microsoft.com/en-us/kb/241020
And yes, don't forget the /ToUnicode entry, as @user2373071 pointed out, or the user will not be able to search your PDF or copy text from it.
See Appendix D (page 995) of the PDF specification. There is a limited number of fonts and character sets pre-defined in a PDF consumer application. To display other characters you need to embed a font that contains them. It is also preferable to embed only a subset of the font, including only required characters, in order to reduce file size. I am also working on displaying Unicode characters in PDF and it is a major hassle.
Check out PDFBox or iText.
http://www.adobe.com/devnet/pdf/pdf_reference.html
I have worked several days on this subject now, and what I have learned is that Unicode is (as good as) impossible in PDF. Using 2-byte characters the way plinth described only works with CID-Fonts.
Seemingly, CID-Fonts are a PDF-internal construct and they are not really fonts in that sense - they seem to be more like graphics subroutines that can be invoked by addressing them (with 16-bit addresses).
So to use Unicode in PDF directly:
- you would have to convert normal fonts to CID-Fonts, which is probably extremely hard - you'd have to generate the graphics routines from the original font(?), extract character metrics, etc.
- you cannot use CID-Fonts like normal fonts - you cannot load or scale them the way you load and scale normal fonts
- also, 2-byte characters don't even cover the full Unicode space
IMHO, these points make it absolutely unfeasible to use Unicode directly.
What I am doing instead now is using the characters indirectly, in the following way:
For every font, I generate a codepage (and a lookup-table for fast lookups) - in C++ this would be something like
std::map<std::string, std::vector<wchar_t> > Codepage;
std::map<std::string, std::map<wchar_t, int> > LookupTable;
Then, whenever I want to put some Unicode string on a page, I iterate over its characters, look them up in the lookup-table and - if they are new - add them to the codepage, like this:
for(std::wstring::const_iterator i = str.begin(); i != str.end(); i++)
{
    if(LookupTable[fontname].find(*i) == LookupTable[fontname].end())
    {
        LookupTable[fontname][*i] = Codepage[fontname].size();
        Codepage[fontname].push_back(*i);
    }
}
Then I generate a new string in which the characters from the original string are replaced by their positions in the codepage, like this:
static std::string hex = "0123456789ABCDEF";
std::string result = "<";
for(std::wstring::const_iterator i = str.begin(); i != str.end(); i++)
{
    int id = LookupTable[fontname][*i] + 1;
    result += hex[(id & 0x00F0) >> 4];
    result += hex[(id & 0x000F)];
}
result += ">";
For example, "H€llo World!" might become <010203030405060407030809>.
and now you can just put that string into the PDF and have it printed, using the Tj operator as usual...
but now you have a problem: the PDF doesn't know that by 01 you mean "H". To solve this problem, you also have to include the codepage in the PDF file. This is done by adding an /Encoding entry to the Font object and setting its Differences array.
For the "H€llo World!" example, this Font object would work:
5 0 obj
<<
  /F1 <<
    /Type /Font
    /Subtype /Type1
    /BaseFont /Times-Roman
    /Encoding <<
      /Type /Encoding
      /Differences [ 1 /H /Euro /l /o /space /W /r /d /exclam ]
    >>
  >>
>>
endobj
I generate it with this code:
ObjectOffsets.push_back(stream->tellp()); // xrefs entry
(*stream) << ObjectCounter++ << " 0 obj \n<<\n";
int fontid = 1;
for(std::list<std::string>::iterator i = Fonts.begin(); i != Fonts.end(); i++)
{
    (*stream) << " /F" << fontid++ << " << /Type /Font /Subtype /Type1 /BaseFont /" << *i;
    (*stream) << " /Encoding << /Type /Encoding /Differences [ 1 \n";
    for(std::vector<wchar_t>::iterator j = Codepage[*i].begin(); j != Codepage[*i].end(); j++)
        (*stream) << " /" << GlyphName(*j) << "\n";
    (*stream) << " ] >>";
    (*stream) << " >> \n";
}
(*stream) << ">>\n";
(*stream) << "endobj \n\n";
Notice that I use a global font register - I use the same font names /F1, /F2, ... throughout the whole PDF document. The same font-register object is referenced in the /Resources entry of all pages. If you do this differently (e.g. one font register per page), you might have to adapt the code to your situation...
So how do you find the names of the glyphs (/Euro for "€", /exclam for "!", etc.)? In the above code, this is done by simply calling GlyphName(*j). I generated this method with a bash script from the list found at
http://www.jdawiseman.com/papers/trivia/character-entities.html
and it looks like this:
const std::string GlyphName(wchar_t UnicodeCodepoint)
{
    switch(UnicodeCodepoint)
    {
        case 0x00A0: return "nonbreakingspace";
        case 0x00A1: return "exclamdown";
        case 0x00A2: return "cent";
        ...
    }
}
A major problem I have left open is that this only works as long as you use at most 254 different characters from the same font. To use more than 254 different characters, you would have to create multiple codepages for the same font.
Inside the PDF, different codepages are represented by different fonts, so to switch between codepages you would have to switch fonts, which could theoretically blow your PDF up quite a bit, but I, for one, can live with that...
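A hypothetical sketch of how that spill-over could be managed (the names are mine, not from the answer above): each font keeps a list of codepages, and the lookup returns both the page, which selects the font alias to emit, and the 1-based code within it:
#include <vector>

struct CodeRef { int page; int code; };  // page selects /F<n>, code is 1-based

CodeRef lookupOrAdd(std::vector<std::vector<wchar_t> > &pages, wchar_t c)
{
    for (size_t p = 0; p < pages.size(); ++p)
        for (size_t k = 0; k < pages[p].size(); ++k)
            if (pages[p][k] == c)
                return { static_cast<int>(p), static_cast<int>(k) + 1 };
    // Not seen before: open a fresh codepage once the current one is full.
    if (pages.empty() || pages.back().size() >= 254)
        pages.push_back(std::vector<wchar_t>());
    pages.back().push_back(c);
    return { static_cast<int>(pages.size()) - 1,
             static_cast<int>(pages.back().size()) };
}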
As dredkin pointed out, you have to use the glyph indices instead of the Unicode character value in the page content stream. This is sufficient to display Unicode text in PDF, but the Unicode text would not be searchable. To make the text searchable or have copy/paste work on it, you will also need to include a /ToUnicode stream. This stream should translate each glyph in the document to the actual Unicode character.
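A sketch of what generating such a stream could look like for the codepage approach above (my own illustration, not code from the answers; the stream object framing and the /ToUnicode reference in the font dictionary are omitted, and the spec caps each bfchar block at 100 entries, so real code should emit them in chunks):
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

// Map each 1-based codepage position back to its Unicode code point, so
// text extraction and copy/paste can recover the original characters.
std::string makeToUnicodeCMap(const std::vector<wchar_t> &codepage)
{
    std::ostringstream o;
    o << "/CIDInit /ProcSet findresource begin\n"
      << "12 dict begin\nbegincmap\n"
      << "/CMapName /Custom def\n/CMapType 2 def\n"
      << "1 begincodespacerange <00> <FF> endcodespacerange\n"
      << codepage.size() << " beginbfchar\n";
    o << std::hex << std::uppercase << std::setfill('0');
    for (size_t i = 0; i < codepage.size(); ++i)
        o << "<" << std::setw(2) << (i + 1) << "> <"
          << std::setw(4) << static_cast<unsigned>(codepage[i]) << ">\n";
    o << "endbfchar\nendcmap\n"
      << "CMapName currentdict /CMap defineresource pop\nend\nend\n";
    return o.str();
}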
I'm not a PDF expert, and (as Ferruccio said) the PDF specs at Adobe should tell you everything, but a thought popped into my mind:
Are you sure you are using a font that supports all the characters you need?
In our application, we create PDFs from HTML pages (with a third-party library), and we had this problem with Cyrillic characters...
