What are the real-life areas of implementing the Simplex Method of Linear Programming? [closed]

I would like to find some more or less detailed descriptions of real-life problems that can be successfully solved by the Simplex method.
Could anyone give me references to materials describing real-life tasks where the Simplex method is used successfully?
I don't need theoretical material, as I have finished a post-graduate Operations Research programme at my university, so I definitely know some theory about this method.
In theory the Simplex method can be used in a lot of areas; I would like to know in which tasks it is actually used in the world today.

Well, the Simplex algorithm can be used in a lot of areas, like you said.
Here is an example of an energy system model, which minimizes the cost of an energy system that has to satisfy a given demand.
In theory, if you apply this model to the energy generation of a country (with a real input file for that country), you may find the cheapest way to satisfy the demand by generating power from various types of power plants.
https://github.com/tum-ens/urbs
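To give a feel for what such a cost-minimization model boils down to, here is a minimal sketch using SciPy's linprog. The plant names, costs and capacities are invented for illustration (not data from urbs), and recent SciPy versions solve the LP with HiGHS rather than the textbook Simplex tableau, but the model is the same kind of linear program.

```python
# Toy version of the idea: pick the cheapest generation mix that covers demand.
# All numbers below are made-up illustration values.
from scipy.optimize import linprog

costs = [30.0, 50.0, 10.0]               # cost per MWh: coal, gas, solar (hypothetical)
capacity = [(0, 70), (0, 100), (0, 40)]  # per-plant generation limits in MWh
demand = 100.0

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the constraint
# coal + gas + solar >= demand becomes -(coal + gas + solar) <= -demand.
res = linprog(c=costs, A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-demand], bounds=capacity)

print(res.x)    # cheapest mix, e.g. [60., 0., 40.]
print(res.fun)  # total cost of that mix
```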

As the CEO and founder of a company that deals mainly with operations research problems, I can tell you that linear programming is an amazing tool for solving industrial and enterprise problems.
You can find some detailed examples in the book "Linear Programming and Network Flows" by Mokhtar S. Bazaraa, John J. Jarvis and Hanif D. Sherali.
All these examples are used in industry (sometimes with some changes) and are crucial for reducing production costs and improving the efficiency of a company.

Related

Human Brain Emulation? [closed]

Is there any open source project that is trying to implement and emulate the human brain and feelings in computer software?
I believe in the future of artificial intelligence technology, and I have always wanted to contribute to it and learn about it.
Thank you.
We don't know enough about how the brain works to attempt to do what you're saying in a principled way. (I.e., anything of the sort is "guessing wildly".) So this isn't really a software question--if we had any idea of what to write, perhaps it would be, but right now we don't.
However, you may be interested in the Blue Brain Project for a more biological approach, or in any of a number of machine learning projects like the DARPA Autonomous Vehicle Grand Challenge. A less useful but more conversational approach might be found in ALICE, but I wouldn't recommend that for anything useful.
Having used a brain for over 50 years, it's the last thing I'd choose to model an AI on. Brains are notoriously unreliable and arbitrary, and have hidden biases that could take a shrink years to sort out.
Jeff Hawkins, the author of "On Intelligence", has a company called Numenta. He has a theory of how brains work, and Numenta has a software product that models it. I downloaded and played with it a while ago; it seems to be pretty good at image recognition. I am not entirely sure what the licensing is, though; I believe it is free for academic purposes. Also, the website appears to be down at this time.
http://en.wikipedia.org/wiki/Numenta
http://www.numenta.com/
Most of the AI lectures I took in school were by professors who had been chasing the dream of "strong AI" for years, and had finally realized that if they could barely understand how a human brain and mind function (and the theories behind these functions sometimes change almost daily), how could they ever hope to simulate it artificially? Most of them were resigned to AI in niches where the problem is more clearly defined: pathfinding, applications of SAT-solving, image processing, chess-winning, conversation, etc... but they'd given up on the true, general-purpose "thinking machine".
My advice would be to look into a specific problem area that you are interested in (such as pathfinding; applications of SAT solvers, such as diagnosis systems; etc.) and see what AI approaches have been taken to solve it. Maybe the problem you are interested in doesn't have much in terms of AI solutions. In that case, you could get started on a new one! ;)
...But you will probably have to narrow it down to a specific class of problem if you don't want to be overwhelmed - at least at first.
The field you're looking for is Machine Learning, specifically evolutionary algorithms like Genetic Algorithms or Genetic Programming. One algorithm I know of that specifically set out to mimic the human brain is Hierarchical Temporal Memory, which I read about here. But this is a very difficult problem and we are still YEARS away from mimicking the human brain in any meaningful way.
There are algorithms which model the human brain. They're called Artificial Neural Networks (ANN). They basically model neurons and their synapses, and attempt to capture the way a neuron accepts incoming signals and, if the combined input is strong enough, fires its own signal on to other neurons.
The thing is, building ANNs as a method of attempting to simulate the real thing is a lot like using a nuke to simulate the sun: sure, it'll give you some valuable data, but in terms of its ability to approximate the thing it's modelling, it falls WAYYY short.
I'm not 100% positive on the relative scales here, but to give a decent idea, consider the following (this is definitely going to be off by a few orders of magnitude... but it's close enough to get an idea of why ANNs aren't running the world for us):
If you took every single computer on the planet and had them using every single available resource to create the largest ANNs they could, and then connected all those different ANNs to each other (thus creating an even larger ANN) you MIGHT start to get close to the number of connections present in the human brain.
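To make the "combine incoming signals and fire" picture concrete, here is a minimal single-neuron sketch in NumPy; the weights, inputs and sigmoid activation are arbitrary illustration choices, not a claim about how any particular ANN library works.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weight the incoming signals, sum them,
    and squash the result through a sigmoid 'firing' function."""
    combined = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-combined))   # output between 0 and 1

# Arbitrary example: three incoming signals and their synaptic weights.
x = np.array([0.5, 0.1, 0.9])
w = np.array([0.8, -0.4, 0.3])
print(neuron(x, w, bias=0.1))   # roughly 0.67, a fairly strong output

# A full network is just layers of such neurons feeding one another,
# trained by adjusting the weights (e.g. via backpropagation).
```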
You could have a look at Cyc:
Cyc is an artificial intelligence project that attempts to assemble a comprehensive ontology and knowledge base of everyday common sense knowledge, with the goal of enabling AI applications to perform human-like reasoning. The project was started in 1984 by Douglas Lenat at MCC and is developed by the company Cycorp. Parts of the project are released as OpenCyc, which provides an API, RDF endpoint, and data dump under an open source license.
Not precisely a brain, but an important component of an artificial intelligence.
There is a field of computer science known as Organic Computing (http://en.wikipedia.org/wiki/Organic_computing). Some of the goals of this effort are the following:
self-organization
self-configuration (auto-configuration)
self-optimisation (automated optimization)
self-healing
self-protection (automated computer security)
self-explaining
context-awareness
The closest thing I know of to this would be the Watchmaker framework. While not related to the human brain, it does seem to strive towards an AI-type framework.
http://watchmaker.uncommons.org/
The Watchmaker Framework is an extensible, high-performance, object-oriented framework for implementing platform-independent evolutionary/genetic algorithms in Java.

What are the typical use cases of Genetic Programming?

Today I read this blog entry by Roger Alsing about how to paint a replica of the Mona Lisa using only 50 semi transparent polygons.
I'm fascinated with the results for that particular case, so I was wondering (and this is my question): how does genetic programming work and what other problems could be solved by genetic programming?
There is some debate as to whether Roger's Mona Lisa program is Genetic Programming at all. It seems to be closer to a (1 + 1) Evolution Strategy. Both techniques are examples of the broader field of Evolutionary Computation, which also includes Genetic Algorithms.
Genetic Programming (GP) is the process of evolving computer programs (usually in the form of trees - often Lisp programs). If you are asking specifically about GP, John Koza is widely regarded as the leading expert. His website includes lots of links to more information. GP is typically very computationally intensive (for non-trivial problems it often involves a large grid of machines).
If you are asking more generally, evolutionary algorithms (EAs) are typically used to provide good approximate solutions to problems that cannot be solved easily using other techniques (such as NP-hard problems). Many optimisation problems fall into this category. It may be too computationally-intensive to find an exact solution but sometimes a near-optimal solution is sufficient. In these situations evolutionary techniques can be effective. Due to their random nature, evolutionary algorithms are never guaranteed to find an optimal solution for any problem, but they will often find a good solution if one exists.
Evolutionary algorithms can also be used to tackle problems that humans don't really know how to solve. An EA, free of any human preconceptions or biases, can generate surprising solutions that are comparable to, or better than, the best human-generated efforts. It is merely necessary that we can recognise a good solution if it were presented to us, even if we don't know how to create a good solution. In other words, we need to be able to formulate an effective fitness function.
Some Examples
Travelling Salesman
Sudoku
EDIT: The freely-available book, A Field Guide to Genetic Programming, contains examples of where GP has produced human-competitive results.
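To give a flavour of what "evolving programs as trees" looks like in code, here is a deliberately tiny, mutation-only sketch of symbolic regression. It is closer to a simple evolution strategy over expression trees than to full Koza-style GP with crossover, and every parameter (population size, depth, operators, target function) is an arbitrary illustration choice.

```python
import random

# Programs are expression trees over x: the terminal 'x', a numeric constant,
# or an (operator, left, right) tuple. Toy target to rediscover: f(x) = x*x + x.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return x if tree == 'x' else tree   # terminal: the variable or a constant

def fitness(tree):
    # Sum of absolute errors on a few sample points; lower is better.
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def mutate(tree):
    # Replace a random subtree (sometimes the whole tree) with a fresh one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree()
    op, left, right = tree
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

population = [random_tree() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness)
    survivors = population[:10]                               # keep the 10 best programs
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=fitness)
print(fitness(best), best)   # often a low-error tree, sometimes exactly x*x + x
```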
Interestingly enough, the company behind the dynamic character animation used in games like Grand Theft Auto IV and the latest Star Wars game (The Force Unleashed) used genetic programming to develop movement algorithms. The company's website is here and the videos are very impressive:
http://www.naturalmotion.com/euphoria.htm
I believe they simulated the nervous system of the character, then randomised the connections to some extent. They then combined the 'genes' of the models that walked furthest to create more and more able 'children' in successive generations. Really fascinating simulation work.
I've also seen genetic algorithms used in path finding automata, with food-seeking ants being the classic example.
Genetic algorithms can be used to solve almost any optimization problem, although in a lot of cases there are better, more direct methods. GAs belong to the class of metaheuristics, which means they can adapt to pretty much anything you throw at them, provided you can come up with a way of encoding a potential solution, of combining/mutating solutions, and of deciding which solutions are better than others. Like simulated annealing, a GA also has the advantage over a pure hill-climbing algorithm that it can escape local maxima.
http://en.wikipedia.org/wiki/Genetic_algorithm#Problem_domains
I used genetic programming in my thesis to simulate evolution of species based on terrain, but that is of course the A-life application of genetic algorithms.
The kinds of problems GAs are good at are hill-climbing problems. The catch is that it's normally easier to solve most of these problems by hand, unless the factors that define the problem are unknown and that knowledge can't be obtained any other way (say, things related to societies and communities), or unless you have a good algorithm but need to fine-tune its parameters; in those situations GAs are very useful.
One fine-tuning case I worked on was tuning several Othello AI players based on the same algorithms, giving each a different play style, thus making each opponent unique with its own quirks. I then had them compete to cull out the top 16 AIs, which I used in my game. The advantage was that they were all very good players of more or less equal skill, so it was interesting for the human opponent because they couldn't read the AI as easily.
You should ask yourself: "Can I (a priori) define a function to determine how good a particular solution is relative to other solutions?"
In the Mona Lisa example, you can easily determine whether the new painting looks more like the source image than the previous painting did, so genetic programming can be "easily" applied.
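As a concrete sketch of such a fitness function for the Mona Lisa case, assuming the candidate rendering and the source image are already available as same-sized NumPy arrays (the function and variable names here are made up for illustration):

```python
import numpy as np

def fitness(candidate_pixels, target_pixels):
    """Lower is better: mean squared per-pixel difference between the
    candidate rendering and the source image."""
    diff = candidate_pixels.astype(np.float64) - target_pixels.astype(np.float64)
    return float(np.mean(diff ** 2))

# An evolutionary loop then simply keeps a mutated candidate whenever
# fitness(new_candidate, target) < fitness(current_candidate, target).
```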
I have some projects using genetic algorithms. GAs are ideal for optimization problems where you cannot develop a fully sequential, exact algorithm to solve the problem. For example: what's the best combination of a car's characteristics to make it faster and at the same time more economical?
At the moment I'm developing a simple GA to put together playlists. My GA has to find the best combinations of albums/songs that are similar (this similarity is "calculated" with the help of last.fm) and suggest playlists to me.
There's an emerging field in robotics called Evolutionary Robotics (w:Evolutionary Robotics), which uses genetic algorithms (GA) heavily.
See w:Genetic Algorithm:
Simple generational genetic algorithm pseudocode:
Choose initial population
Evaluate the fitness of each individual in the population
Repeat until termination (time limit or sufficient fitness achieved):
    Select best-ranking individuals to reproduce
    Breed new generation through crossover and/or mutation (genetic operations) and give birth to offspring
    Evaluate the individual fitnesses of the offspring
    Replace worst-ranked part of population with offspring
The key is the reproduction part, which could happen sexually or asexually, using the genetic operators crossover and mutation.
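Here is a runnable sketch of that generational loop. The fitness function (count of 1-bits in a fixed-length bitstring, the classic "OneMax" toy problem) and all the parameters are arbitrary illustration choices.

```python
import random

GENOME_LENGTH = 32
POP_SIZE = 30

def fitness(genome):
    return sum(genome)   # toy objective: maximize the number of 1-bits

def crossover(a, b):
    point = random.randrange(1, GENOME_LENGTH)   # single-point crossover
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    return [1 - bit if random.random() < rate else bit for bit in genome]

# Choose initial population
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

# Repeat until termination (a generation cap, or perfect fitness reached)
for generation in range(200):
    population.sort(key=fitness, reverse=True)    # evaluate and rank
    if fitness(population[0]) == GENOME_LENGTH:
        break
    parents = population[:POP_SIZE // 2]          # select best-ranking individuals
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE // 2)]   # breed via crossover and mutation
    population = parents + offspring              # replace worst-ranked half

print(generation, fitness(max(population, key=fitness)))
```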

Digital Circuit understanding

In my quest for getting some basics down before I start going into programming I am looking for essential knowledge about how the computer works down at the core level.
I have a theory that actually understanding what, for instance, a stack overflow (let alone a stack) is, rather than relying on my sporadic knowledge of computer systems, will help me in the longer term.
Are there any books or sites that take you through how processors are structured, give a holistic overview, and somehow relate that to the digital logic that is good to know about?
Am I making sense?
Yes, you should read some topics from John L. Hennessy & David A. Patterson, "Computer Architecture: A Quantitative Approach".
It covers the history and theory of microprocessors (starting with RISC architectures such as MIPS), pipelining, memory, storage, etc.
David Patterson is a Professor of Computer Science in the EECS Department at U.C. Berkeley; here's the link: http://www.eecs.berkeley.edu/~pattrsn/
Hope it helps.
Tanenbaum's Structured Computer Organization is a good book about how computers work. You might find it hard to get through the book, but that's mostly due to the subject, not the author.
However, I'm not sure I would recommend taking this approach. Understanding how the computer works can certainly be useful, but if you don't really have any programming knowledge, you can't really put your knowledge to good use - and you probably don't need that knowledge yet anyway. You would be better off learning about topics like object-oriented programming and data structures to learn about program design, because unless you're looking at doing embedded programming on very limited systems, you'll find those skills far more useful than knowledge of a computer's inner workings.
In my opinion, 20 years ago it was possible to understand the whole spectrum from BASIC all the way through operating system, hardware, down to the transistor or even quantum level. I don't know that it's possible for one person to understand that whole spectrum with today's technology. (Years ago, everyone serviced their own car. Today it's too hard.)
Some of the "layers" that you might be interested in:
http://en.wikipedia.org/wiki/Boolean_logic (this will be helpful for programming)
http://en.wikipedia.org/wiki/Flip-flop_%28electronics%29
http://en.wikipedia.org/wiki/Finite-state_machine
http://en.wikipedia.org/wiki/Static_random_access_memory
http://en.wikipedia.org/wiki/Bus_%28computing%29
http://en.wikipedia.org/wiki/Microprocessor
http://en.wikipedia.org/wiki/Computer_architecture
It's pretty simple really: the CPU loads instructions and executes them, and most of those instructions revolve around loading values into registers or memory locations and then manipulating those values. Certain memory ranges are set aside for communicating with the peripherals attached to the machine, such as the screen or hard drive.
Back in the days of the Apple ][ and Commodore 64 you could put a value directly into a memory location and that would directly change a pixel on the screen. Those days are long gone; it is all abstracted away from you (the programmer) by several layers of code, such as drivers and the operating system.
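As a toy illustration of that load-and-execute cycle, here is a made-up three-instruction machine in Python; it is nothing like a real CPU's instruction set, just the fetch/decode/execute idea in miniature.

```python
# A made-up miniature machine: four registers, instructions encoded as tuples.
#   ("LOAD", r, value)  -> put a constant into register r
#   ("ADD", r, a, b)    -> r = a + b
#   ("PRINT", r)        -> send register r to "the peripheral" (here, stdout)
def run(program):
    registers = [0, 0, 0, 0]
    pc = 0                                  # program counter
    while pc < len(program):
        op, *args = program[pc]             # fetch and decode
        if op == "LOAD":
            r, value = args
            registers[r] = value
        elif op == "ADD":
            r, a, b = args
            registers[r] = registers[a] + registers[b]
        elif op == "PRINT":
            print(registers[args[0]])
        pc += 1                             # step to the next instruction

run([("LOAD", 0, 2), ("LOAD", 1, 3), ("ADD", 2, 0, 1), ("PRINT", 2)])   # prints 5
```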
You can learn about this sort of stuff, or assembly language (which I am a huge fan of), or AND/NAND gates at the hardware level, but knowing it is not going to help you code up a web application in ASP.NET MVC, or write a quick and dirty Python or PowerShell script.
There are lots of resources sprinkled around the net that will give you insight into how the CPU and the rest of the hardware works, but if you want to get down and dirty I honestly think you should buy one of those older machines off eBay or somewhere and learn its particular flavour of assembly language (I understand there are also a lot of programmable PIC controllers out there that might be good to learn on). Picking up an older machine eliminates the software abstractions and makes things way easier to learn. You learn far better when you get instant gratification, like making sprites move around a screen or generating sounds by directly toggling the speaker (or using a PIC controller to control a small robot). With those older machines, the schematics for an Apple ][ motherboard fit onto a roughly A2-sized sheet of paper folded into the back of one of the Apple manuals; I would hate to imagine what they look like these days.
While I agree with the previous answers insofar as it is incredibly difficult to understand the entire process, we can at least break it down into categories, from lowest (closest to electrons) to highest (closest to what you actually see).
Lowest
Solid State Device Physics (How transistors work physically)
Circuit Theory (How transistors are combined to create logic gates)
Digital Logic (How logic gates are put together to create digital functions or structures, e.g. multiplexers, full adders, etc.; a small gate-level sketch follows after this list)
Hardware Organization (How the data path is laid out in the CPU, the components of a von Neumann machine: memory, processor, Arithmetic Logic Unit, fetch/decode/execute)
Microinstructions (bit-level programming)
Assembly (Programming with mnemonics, but directly specifying registers; it takes forever to program even simple things)
Interpreted/Compiled Languages (Programming languages that get compiled or interpreted to assembly; the operating system may be in one of these)
Operating System (Process scheduling, hardware interfaces, abstracts lower levels)
Higher level languages (these kind of appear twice; it depends on the language. Java is done at a very high level, but C goes straight to assembly, and the C compiler is probably written in C)
User Interfaces/Applications/Gui (Last step, making it look pretty)
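To make the "Digital Logic" layer concrete, here is a one-bit full adder built out of nothing but boolean gate functions; this is a sketch of the idea in Python, not how a hardware description language would express it.

```python
# Basic gates as functions on 0/1 values.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One-bit full adder: two XORs, two ANDs and an OR."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

# Chaining full adders bit by bit gives a ripple-carry adder, i.e. multi-bit addition.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", full_adder(a, b, c))
```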
You can find out a lot about each of these. I'm only somewhat expert in the digital logic side of things. If you want a thorough tutorial on digital logic from the ground up, go to the electrical engineering menu of my website:
affablyevil.wordpress.com
I'm teaching the class, and adding online lessons as I go.

Should FPGA design be integrated into a Computer Science curriculum? [closed]

If computer science is about algorithm development, and therefore not limited to the imaginations of processor vendors but open to the realm of all that is practically computable, then shouldn't an FPGA, which is almost ideally suited for studying cellular automata, be considered a valid platform upon which to study computer science? One particular area where I feel current curricula are weak is parallelism and its integration into programming languages. I think compiler design could benefit from a curriculum that lets students deal with the explicit parallelism of FPGAs.
As a CS student, I would LOVE an FPGA course. However, everyone is set in their ways and does not want to modify the curriculum. It's pretty heavy on theory, and they think that microcontrollers and FPGAs require too much knowledge of electricity, etc. to be of use to a CS student.
Because of this, I'm taking an electrical engineering minor.
I honestly think that it would be useful, but I realize that this is a hard question to answer. The question really isn't whether or not a FPGA course would be valuable (it clearly would), but would it be valuable enough to drop some other course from the curriculum and replace it with this? My suspicion is that most curriculums would not be able to free up enough time to cover it as anything other than an afterthought.
Offer it. Recommend it. Don't require it.
FPGAs are way cool. I have two questions:
What are the ideas of enduring value, that students will still work with 20 years after graduation?
What are you going to eliminate to make room for an FPGA course?
"Education is what is left when knowledge is gone."
As a recent graduate in Computer Engineering who has taken multiple embedded systems courses, I feel that it would be extremely useful. It would help broaden the horizons of standard programming, and it would help CS students with the most important aspect of embedded systems development, which is efficiency. Managing memory is crucial, and the skills gained from an FPGA-based course can carry over to desktop application development. I did not have to wait years for code to compile, but "place and route" still isn't my favorite phrase, haha. Whether to drop a course is hard for me to say, because I am a CpE rather than a CS and do not know the exact curriculum. However, I am working on desktop applications at the moment, and some of the skills I gained in my FPGA courses have affected my work. There's my two cents. Enjoy.
As a recent Computer Science graduate, I'd say FPGAs are more in the realm of Computer or Electrical Engineering. True, CS is about algorithms, but it is also about the theory of computing, data structures, artificial intelligence, etc., etc. I think FPGAs are just too specific to be a required component. The concurrent programming class I took was at a much higher level, but I believe it gave a decent introduction to parallelism.
As it was, there were a bunch of upper-year classes that I wish I could have taken but didn't have room for: quantum computing, compiler construction, real-time systems, etc. All of those would also be good candidates for inclusion in the core curriculum.
Yes, FPGA design should be integrated into a CS curriculum in some form, at least as a lab in a digital design or parallel computing class. Modern FPGAs are no longer just a bunch of configurable logic gates; they are systems on chip (SoC) with multi-core processors and a rich set of peripherals.
I see more and more engineers with CS degrees and little hardware experience doing embedded design on FPGAs. To exemplify my point, look at the discussions in the Embedded Solution section on the Xilinx forum.
Good lord no. I did an FPGA course in my final year, and it meant that I had to sit around for hours and hours while my code compiled. The work involved for a student to get simple code onto a board is horrendous. To this day, the words "place and route" send a shiver up my spine.

Hardware knowledge in computer science?

How much hardware understanding does one need to fully comprehend "Operating System" and "Computer Architecture" courses one takes as a computer science student?
Two thoughts:
First, everything is going parallel. Multi-threading is one thing, multi-core is another. There are oodles of issues around caching, memory architecture, resource allocation, etc. Many of these are 'handled' for you, but the more you know about the metal, the better.
Second, number representations in hardware. This is as old as computer science itself, but it still trips everyone up. Not sure who said this, but it's perfect: "Mapping an infinity of numbers onto a finite number of bits involves approximations." Understanding this, and numerical analysis in general, will save your bacon time and again. Serialization and endianness, etc.
Besides, it's fun!
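A quick, concrete illustration of both points (finite-precision approximation and endianness) in plain Python:

```python
import struct

# Finite precision: 0.1 and 0.2 have no exact binary floating-point representation.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Endianness: the same 32-bit integer serialized little-endian vs big-endian.
print(struct.pack('<I', 1).hex())   # 01000000
print(struct.pack('>I', 1).hex())   # 00000001
```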
At that level, the more you know the better, but the bare necessity is boolean logic design for computer architecture. Understand how you design registers, adders, multiplexers, flip-flops, etc. from basic logic units (AND, OR, clocks). You can probably understand operating systems starting from a basic understanding of assembly, memory-mapped I/O, and interrupts.
EDIT: I'm not certain what you mean by "hardware", do you consider logic design to be hardware? Or were you talking about transistors? I suppose it wouldn't hurt to understand the basics of semiconductors, but architecture is abstracted above the real hardware level. I would also say that operating systems are abstracted above the architecture.
At the very basic level, you should know about the von Neumann architecture and how it maps onto real-life computers. Above that, the more the better. And not just for the OS: in garbage-collected and VM languages, know how the heap, stack and instructions work and are executed, so you know what will perform badly and how to improve it to get the best out of the architecture.
"Computer science is no more about computers than astronomy is about telescopes."
It helps when you are trying to optimize for the hardware you are targeting. Take a hard drive for example, it helps to write software that takes advantage of locality to minimize seek time. If you just treat a hard drive as 'it works', and stick files and data all over the place, you will run into severe fragmentation issues and result in lower performance.
A lot of this is taken into consideration when designing an operating system since you are trying to maximize performance. So in short, learning something about it can help, and certainly can't hurt in any way.
A good way to determine a baseline knowledge set for hardware knowledge generally needed for Comp Sci studies is to visit the curriculum websites of a wide range of prestigious universities. For me, I'd check the Comp Sci curriculum at MIT, Stanford, University of Illinois at Urbana/Champaign (UIUC), Georgia Tech, etc. Then I'd get an average understanding from that.
Furthermore, you could also personally phone up guidance counselors at Universities to which you are either attending or applying in order to get a personalized view of your needs. They would be available to guide you based on your desires. Professors even more. They are surprisingly accessible and very willing to give feedback on things like this.
Recently, I looked at grabbing my master's degree. As an alum of UIUC, I emailed a few old professors there and told them of my interest. I asked them several questions geared at understanding gradschool and their perspective. They shared and most invited me to call and chat.
Personally, I'd agree with #CookieOfFortune. The more you know about how a computer works internally, the more you can use that to your advantage while writing software. That said, it's not as if you really need to understand the physics of electronics to a high degree. It's interesting, sure, but your focus should be on circuitry, logic, etc. Much of this should be presented in a good Operating Systems course or at least provide you with springboards to learn more on your own.
