Is this a reasonable "application entry point" definition?

I have recently come across a situation where code is dynamically loading some libraries, wiring them up, then calling what is termed the "application entry point" (one of the libraries must implement IApplication.Run()).
Is this a valid "application entry point"?
I would always have considered the application entry point to come before the libraries are loaded, so I found it slightly misleading that IApplication.Run() is only called after a considerable amount of work.

The terms application and system are used so widely and diversely that you need to agree upfront with your conversation partner on what they mean. E.g. sometimes an application is something with a UI, and a system is 'UI-less'. In general it's just a case of you say potato, I say potahto.
As for the example you use: that's just what a runtime (e.g. .NET or Java) does: loading a set of libraries and calling the application entry point, i.e. the "main" method.
So in your case, the code loading the libraries is doing just the same, probably calling a method on an interface; you could then consider the loading code to be the runtime for that application. It's just a matter of perspective.
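The analogy can be sketched in a few lines: a tiny "runtime" that loads code dynamically and calls a conventional entry point on it. (This is a purely illustrative Python sketch; the module name and the run() convention are invented, standing in for something like Assembly.Load plus IApplication.Run().)

```python
import types

# A minimal "runtime": it owns the loading/wiring work, then hands
# control to whatever the loaded code calls its entry point.
def load_and_run(source: str, entry_point: str = "run"):
    # Stand-in for dynamically loading a library (e.g. Assembly.Load
    # or dlopen); here we just execute source into a fresh module.
    module = types.ModuleType("plugin")
    exec(source, module.__dict__)
    # The "application entry point" from the host's perspective:
    return getattr(module, entry_point)()

plugin_source = """
def run():
    return "application started"
"""

result = load_and_run(plugin_source)
```

From the loaded code's perspective, run() is where its execution begins; all the loading before it is just the runtime doing its job.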

The term "application" can mean whatever you want it to mean. "Application" merely means a collection of resources (libraries, code, images, etc) that work together to help you solve a problem.
So to answer your question, yes, it's a valid use of the term 'application'.

Application on its own actually means nothing. It is often used by people to talk about computer programs that provide some value to the user. A more precise term is application software, which has the following definition:

Application software is a subclass of computer software that employs the capabilities of a computer directly and thoroughly to a task that the user wishes to perform. This should be contrasted with system software, which is involved in integrating a computer's various capabilities, but typically does not directly apply them in the performance of tasks that benefit the user. In this context the term application refers to both the application software and its […]
And since application really means application software, and software is any piece of code that performs some task on a computer, I'd say a library can also be an application.
Most terms are of an artificial nature anyway. Is a plugin not an application? Is the Flash plugin in your browser not an application? People say no, it's just a plugin. Why? Because it can't run on its own; it needs to be loaded into a real process. But there is no definition saying that only things which "can run on their own" are applications. The same holds true for a library. The core application could just be an empty container, and all logic and functionality, even the interaction with the user, could be performed by plugins or libraries; in that case the plugins would be more of an application than the empty container that just provides some context for them to run in. Compare this to Java. A Java application can't run on its own, it must run within a Java Virtual Machine (JVM). Does that mean the JVM is the application and the Java code is just... well, what? Isn't the Java code the real application, and the JVM just an empty runtime environment that provides nothing to the end user without the loaded Java code?

I think in this context "application entry point" means "the point at which the application (your code) enters the library".

I think what you're probably referring to is the main() function in C/C++ code, or WinMain in a Windows app. That is, it's the point where execution normally starts in an app. Your question is pretty broad and vague (for example, which OS are you running this on?), but this may be what you're looking for. This might also address the question.
Bear in mind when you're asking questions, details are your friend. People can give you a much better, more informed answer when you provide them with details.
In a broader context, consider what has to happen from the standpoint of the OS. When the user specifies that they want to run an app, the OS has to load the app from the hard drive, and once the app is loaded into memory, it has to pass control to some point in the memory block occupied by the newly loaded app to continue execution. That would be the "application entry point". When an app is constructed with dynamically linked code, the OS has to load all that dynamically linked code in order to get the correct app image into memory. Loading up those shared bits of code does not change the fact that the OS must have a point to which to pass control once the app is loaded into memory.


Observing or monitoring users working with an application remotely

I'm a believer in observing what users are doing with an application. I think that it is the only way to get an accurate picture of what people are doing. However, I don't always want to be sitting with them and peering over their shoulder; apart from the time burden, it is distracting for them and may also change their behaviour.
What I'd really like is a way of observing/recording/monitoring/logging what users are doing in a windows form application.
I'd also like to be able to analyse the actions of 50+ users and gather some stats.
And observation like this is not the same as spying on people.
Got any suggestions?
You could use TimeSnapper, it takes a screenshot at configurable intervals and allows you to play back the screenshots like a movie. The free version does what you need. You can configure it to snapshot the whole desktop or just the active window.
For real-time observation you could use VNC which is again free.
Edit: for playing back the Timesnapper files from the remote PC, copy them from the configured snapshots directory to your own. You could also of course use Timesnapper to take screenshots of your VNC window, to capture the observation locally.
If you want to 'see' what they're doing, then the suggestions about screen-grabbing at intervals are probably your best bet. This will allow you to see HOW they do something (based on what they expect the UI to behave like), but not necessarily WHAT they are trying to achieve.
However, I would say that a log is probably a better method to see WHAT people are doing. If your log is detailed enough, you can tie every action to an event trace, and build up a time-line and kind of stacktrace of what the application did as a result. From this (and presuming your software UI is designed fairly well), you can see exactly what they were trying to do, but the details of HOW they did it have to be inferred (again, this varies depending on how detailed your log is).
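A minimal sketch of such an action log: each UI event is recorded with a timestamp, and the records can later be replayed into a per-user timeline. (The event names and fields here are invented for illustration; a real log would carry much more detail.)

```python
import time
from dataclasses import dataclass, field

@dataclass
class ActionLog:
    # One record per user action: (timestamp, user, action, detail)
    records: list = field(default_factory=list)

    def log(self, user: str, action: str, detail: str = ""):
        self.records.append((time.time(), user, action, detail))

    def timeline(self, user: str):
        # Replay one user's actions in chronological order
        return [(a, d) for (t, u, a, d) in sorted(self.records) if u == user]

log = ActionLog()
log.log("alice", "open_form", "CustomerSearch")
log.log("bob", "click", "SaveButton")
log.log("alice", "click", "SearchButton")
```

Aggregating such records across 50+ users is then a straightforward matter of grouping by action or by user.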
No need to develop your own solution to this problem. There are quite a few commercially available apps for this.
Moderated remote usability testing: where you watch the user via VNC style streaming footage
Unmoderated remote usability testing: where users do the testing on their own, quantitative data is gathered.
Check out for more information
Disclaimer: I am a developer on the products that I am going to mention.
If you are looking for a tool that will let you know what a user is doing with your application but do not need actual screen captures you can look into using Runtime Intelligence from PreEmptive Solutions.
What we do is inject additional code into your application after it is built (similar to PostSharp's IL weaving) that sends data back to a central collection and reporting portal whenever methods that you decorate with attributes are executed. This allows you to track when users are using your application, which of the decorated methods are executed (it can also measure how long the decorated methods take to execute), and whether the user exited your application as expected or there was an error. The data can be sent back either to servers hosted at PreEmptive (both a free and a commercial version) or to any URL of your choosing if you want to capture and store the data yourself.
Since this relies on using Dotfuscator as the code injection engine, this functionality can be added to any .NET application (console, WinForms, WPF, Silverlight, etc.) as a post-build step. We provide a set of custom attributes if you want to decorate methods in your source, or you can use our user interface to specify which methods will be instrumented; that data will be stored inside the Dotfuscator project file. If you use the Extended Attributes feature of storing the injection points in the project file, you can completely instrument an application without touching the original source code.
We provide a hook so that you can give your user the choice of opting in or out of the usage tracking. Since we work at a low level this does require that you write a method, property or field that contains a boolean value indicating if data should be sent that we check at runtime. You are responsible for actually creating the UI for opt-in/opt-out.
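The general shape of this kind of instrumentation (not PreEmptive's actual API, just an illustrative Python sketch) is a wrapper around decorated methods that, when the user has opted in, records that the method ran and how long it took:

```python
import functools
import time

opted_in = True   # in a real app this would come from the user's opt-in choice
usage_data = []   # stand-in for the central collection portal

def track_usage(func):
    """Record name and duration of each call to a decorated method."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not opted_in:              # respect opt-out: collect nothing
            return func(*args, **kwargs)
        start = time.perf_counter()
        result = func(*args, **kwargs)
        usage_data.append((func.__name__, time.perf_counter() - start))
        return result
    return wrapper

@track_usage
def export_report():
    return "done"

export_report()
```

The injected .NET code works on the same principle, except the wrapping happens post-build rather than in source.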
A free version of all of this, including a freely accessible data reporting portal, is going to be available in Visual Studio 2010 as part of Dotfuscator Community Edition. You can go ahead and download Visual Studio 2010 Beta 1 and try it out today if you wish.
The free functionality is a subset of what is available in the commercial version but it will give you a good idea of how easy it is to use. As always, PreEmptive is happy to provide you with a free, time limited evaluation version of the commercial editions so you can test out the unlimited functionality version.
I am currently writing a series of blog articles on using this as part of Visual Studio 2010, the first one is here and an overview of everything coming in Visual Studio 2010 Beta 1 is here.
Runtime Intelligence is also available for use on any Java application by using DashO for Java as the code injection platform. There is currently no community version of this although there is always the time limited evaluation version.
Do you want a log of the actions? Or do you just want to observe?
Perhaps simply DrawToBitmap on a timer? (although this is a bit hit'n'miss; WPF has better tools here...)
Sometimes this isn't evil... we (actually, Steve) did something similar on the finguistics prototype to mimic a teacher's console (for tracking activity etc).
You can take a look at oDesk's small tool. They provide such a tool for monitoring overseas freelancers. The tool captures a screenshot at random intervals and counts the keyboard and mouse events every minute. Thus you know whether they are working.
With this type of implementation, please provide some sort of privacy policy, notification, and a dialog to ask the user's permission! As Marc Gravell stated, you can put DrawToBitmap on a timer and only capture your application window. These images may be large depending on your application, so that's a big consideration if you are transmitting over a network. I would look into some third-party libraries instead of reinventing the wheel here; I did that with a remote desktop application, and while I got it to run at well over 60 FPS using GDI+, I failed miserably at getting the packet size down to a minimum so I could transmit over the internet.
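The interval-capture approach described above boils down to a timer that repeatedly invokes a capture routine. A sketch with a pluggable capture function (the real one would grab the window contents, e.g. via DrawToBitmap; here a stub stands in so the scheduling logic is visible):

```python
import threading

def capture_at_intervals(capture, interval: float, count: int):
    """Call capture() every `interval` seconds, `count` times, then return."""
    done = threading.Event()
    state = {"remaining": count}

    def tick():
        capture()                     # take one snapshot
        state["remaining"] -= 1
        if state["remaining"] > 0:
            threading.Timer(interval, tick).start()   # schedule the next one
        else:
            done.set()

    threading.Timer(interval, tick).start()
    done.wait()

frames = []
capture_at_intervals(lambda: frames.append("snapshot"), interval=0.01, count=3)
```

Swapping the lambda for a real screen-grab (and writing frames to disk instead of a list) gives you a TimeSnapper-style recorder.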

Windows: How to intercept Win32 disk I/O API

On Windows, all disk I/O ultimately happens via Win32 API calls like CreateFile, SetFilePointer, etc.
Now, is it possible to intercept these disk I/O Win32 calls and hook in your own code, at run time, for all dynamically-linked Windows applications? That is, applications that get their CreateFile functionality via a Windows DLL instead of a static, C library.
Some constraints that I have are:
No source code: I won't have the source code for the processes I'd like to intercept.
Thread safety: My hook code may dynamically allocate its own memory. Further, because this memory is going to be shared with multiple intercepted processes (and their threads), I'd like to be able to serialize access to it.
Conditional delegation and overriding: In my hook code, I would like to be able to decide whether to delegate to the original Win32 API functionality, to use my own functionality, or both. (Much like the optional invocation of the superclass method in the overriding method of a subclass in C++ or Java.)
Regular user-space code: I want to be able to accomplish the above without having to write any device-driver, mainly due to the complexity involved in writing one.
If this is possible, I'd appreciate some pointers. Source code is not necessary, but is always welcome!
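To make the conditional-delegation constraint concrete: the replacement keeps a reference to the original function and decides per call whether to delegate, override, or both. A language-neutral sketch in Python (purely illustrative; a real hook would patch the IAT entry or use a Detours-style trampoline, and the function names here are invented):

```python
def make_hook(original, should_intercept, replacement):
    """Wrap `original`; per call, decide to delegate or override."""
    def hook(*args, **kwargs):
        if should_intercept(*args, **kwargs):
            # Override: use our own functionality; `original` is still
            # available if the replacement wants to call it too.
            return replacement(original, *args, **kwargs)
        return original(*args, **kwargs)   # plain delegation
    return hook

calls = []
def real_open(path):
    # Stand-in for the real CreateFile call
    calls.append(path)
    return f"handle:{path}"

open_hooked = make_hook(
    real_open,
    should_intercept=lambda path: path.startswith("/secret"),
    replacement=lambda orig, path: "access denied",
)
```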
You may want to look into mhook if Detours isn't what you want.
Here are a couple of problems you may run into while working with hooks:
ASLR can prevent injected code from intercepting the intended calls.
If your hooks are global (using AppInit_DLLs for example), only Kernel32.dll and User32.dll are available when your DLL is loaded. If you want to target functions outside of those modules, you'll need to manually make sure they're available.
I suggest you start with Microsoft Detours. A free edition also exists, and it's rather powerful and stable as well. For injection you will have to find which injection method will work for your target applications. I'm not sure whether you need to code those on your own or not, but a simple tool like "Extreme Injector" would serve you well for testing your approaches. And you definitely do not need any kernel-land drivers for such a simple task, in my opinion at least. To get the most help from me and others, I'd like to see your approach first, or a list of further constraints on the problem at hand, or where you have started so far but had problems. This cuts down a lot of back-and-forth and can save your time as well.
Now, if you are not familiar with Detours from Microsoft, please go ahead and download it. You are required to compile it yourself, but it's very straightforward, and it comes with a compiled HTML help file and samples. So far your problem falls under IAT (Import Address Table) and EAT (Export Address Table) hooking.
I hope this non-snippet answer helps you a little bit in your approach to the solution, and if you get stuck come back again and ask. Best of luck!

Silverlight vs ActiveX for lightweight app with system access

Just an R&D question. We need to develop an application that can be run in a browser that has the capability of performing some system checks to gather support information to be emailed to us. These checks will include basic system information, but also will need to scan the filesystem and pull out version information about various DLLS, executables, and .NET assemblies that might be installed. The idea being that we can direct a client to a page and have the application gather the relevant information needed for support, and potentially even populate some database fields. We need it to have as small a footprint as possible.
I've worked with ActiveX before, and know it is capable of these things, but particularly on modern systems security is a nightmare to get around, with a lot of people blocking ActiveX altogether. Is Silverlight easier to deliver to clients? Does it have a lighter footprint? Is it even capable of doing these things?
Silverlight has access to isolated storage, but I don't think it can do what you are looking for (I may be wrong). As for footprint, if I remember correctly, the runtime is reasonably small, and the .xap packages are limited to 4 MB.
Silverlight out-of-browser has access to the file system.
If you intend to run your app in the browser, you will still have to configure the trust as if it were out-of-browser (OOB).
However, iTunes has a neat way of doing something somewhat related. It has a custom protocol (itms://) that allows the browser to invoke a client-side program (iTunes). You can then embed HTML in a webpage that passes parameters as command-line arguments to that app. The website also knows whether iTunes is installed via a cookie. With this in mind, you might be able to encourage your users to install some small app that sets up the custom protocol on install. You could pass command-line parameters to it from the web, and the app would push information from the client back to the server.
To create a real-time experience, you could use sockets + more javascript to update the page with the info you just got off the machine.
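The custom-protocol handoff amounts to the handler app receiving the clicked URL and parsing its parameters as if they were command-line arguments. A sketch of the parsing side (the myapp:// scheme and parameter names are invented for illustration):

```python
from urllib.parse import urlsplit, parse_qs

def parse_protocol_url(url: str):
    """Split e.g. myapp://scan?dir=C:/tools into (scheme, command, params)."""
    parts = urlsplit(url)
    # parse_qs yields lists; flatten to single values for simple args
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return parts.scheme, parts.netloc, params

scheme, command, params = parse_protocol_url("myapp://scan?dir=C:/tools&depth=2")
```

The scheme-to-executable mapping itself is registered with the OS (on Windows, under HKEY_CLASSES_ROOT) by the small installed app.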
Silverlight runs in a pretty restricted silo and can't do a lot of low level things - such as checking the file system. So I would say it does not fit your use case, unfortunately.

Newbie to Qt4 embedded Linux - Application management, deployment and general architecture?

First off, I apologise hugely for asking such basic questions. I am in the process of deciding whether I should use Qt on an embedded Linux device (the first attempt will be on a TI OMAP EVM) for developing a UI, and also for managing applications that run on the device (including adding and removing applications at run time via over-the-air (WiFi) software downloads).
I have been reading the Nokia Qt reference documentation and feel like I have missed a basic step in my understanding.
If I may just clarify what I mean by an application (I am not sure the Qt documentation I have read aligns with this): an application is a program that runs on a device and uses the services of that device.
So I figure I can use Qt as an application framework, and invoke (or launch) Qt applications from it. Applications examples are: email client, mapping, notebook etc.
I would envisage one main window which has a list of the applications available (maybe icons like Android etc.) and then the applications are launched from this main window. If events come in from the system, then the application framework will route the events appropriately, and it's possible that this will cause another application to use the full screen.
I'm struggling (as a complete newbie) to understand whether this means I have to run an application and then run applications from that, or if there is some inbuilt mechanism in the Qt architecture to do this type of application launching.
So instead of asking a question directly about how to do that, I obviously need to start off from the basics. I've read about the QWSServer and QWSClient architecture, and that makes sense in a vague way.
However, I can't find information on how to:
launch applications or manage them. (Who launches/suspends an application?)
Deployment models of applications (Are they in the same Linux process or thread as the QWSServer?)
How to add an application at run time?
I'm guessing I have missed a blindingly obvious top level document that explains this sort of basic functionality. It may be that I should invest the time in downloading the SDK and actually try using Qt (apologies again, I don't get much time to do proper work nowadays :( )
So, if anyone could point me in the direction of the relevant documents, it would be very much appreciated!
Qt is a windowing toolkit - not a window manager.
There are a few Qt window manager projects for small devices and of course the whole of KDE is written in Qt.
Qt/Embedded is really just Qt down to the hardware, rather than relying on the operating system or X Windows to do the drawing. I think you might be confusing Qt with one of the Nokia mobile operating systems that use Qt for their GUI.
QWS is a windowing system specifically designed to support Qt applications in embedded situations, in which there may be no other window manager (or no acceptably lightweight one). It does a bit less than heavyweight ones such as KDE or GNOME, but handles things along the same lines. One nice aspect, however, is that you can develop your own plugin to draw the window frames, title bars, etc., in order to style them the way you want.
In reference to QWS, you asked about:
launch applications or manage them. (Who launches/suspends an application?)
The operating system launches and suspends applications. The QWS is a windowing system, not an operating system. In the cases I know about, it runs on top of linux variants. Your envisioned main window would probably best be developed as its own application that launches other applications in some manner.
Deployment models of applications (Are they in the same Linux process or thread as the QWSServer?)
They are generally in other processes than the window server. Depending on how you launch them, of course, they may be in the same process or a different process as your launchpad application. Beware a potential problem of running it in the same process: you can only have one QApplication instance in a given process.
How to add a application at run time?
I would assume your launchpad would provide a mechanism for adding an application, which would put it in the appropriate place on disk. You could use this to do any number of things to alter the list of applications to launch. One example would be to just update your GUI based on a blessed directory. Another option might be to have a separate plugin bundled with the applications, and your launchpad application loads those plugins to get information about the applications. Really, the possibilities are almost endless here, assuming you provide the entry point to install the applications on the system.
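The launchpad pattern described above, each application in its own process, launched from a blessed directory, can be sketched like this (Python's subprocess plays the role QProcess would play in a Qt launchpad; the directory layout is invented for illustration):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def discover_apps(apps_dir: Path):
    # The "blessed directory": every script in it is a launchable app.
    return sorted(apps_dir.glob("*.py"))

def launch(app_path: Path):
    # Each app runs in its own process, so each gets its own
    # QApplication-equivalent; the launchpad only supervises.
    return subprocess.Popen([sys.executable, str(app_path)])

# Installing an app "at run time" is just dropping a file in the directory:
apps_dir = Path(tempfile.mkdtemp())
(apps_dir / "hello.py").write_text("print('hello from app')\n")

apps = discover_apps(apps_dir)
proc = launch(apps[0])
proc.wait()
```

Updating the launchpad's GUI is then just re-running discover_apps() whenever the directory changes.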

Best/standard method for slowing down Silverlight Prism module loading (for testing)

During localhost testing of modular Prism-based Silverlight applications, the XAP modules download too fast to get a feel for the final result. This makes it difficult to see where progress, splash-screens, or other visual states, needs to be shown.
What is the best (or most standard) method for intentionally slowing down the loading of XAP modules and other content in a local development set-up?
I've been adding the occasional timer delay (via a code-based storyboard), but I would prefer something I can place under the hood (in say the Unity loader?) to add a substantial delay to all module loads and in debug builds only.
Suggestions welcomed*
*Note: I have investigated the "large file" option and it is unworkable for large projects (it fails to create the XAP with really large files, throwing an out-of-memory error). The solution needs to be code-based and preferably integrate behind the scenes to slow down module loading in a local-host environment.
*Note: To clarify, we are specifically seeking an answer compatible with the Microsoft PRISM pattern & PRISM/CAL libraries.
Do not add any files to your module projects. That adds unnecessary regression testing to your module, since you are changing its layout by extending the non-executable portion. Chances are you won't do this regression testing, and who knows if it will cause a problem. Best to be paranoid.
Instead, come up with a Delay(int milliseconds) procedure and wrap the callback you use to retrieve the remote assembly, so the delay is applied before the assembly is handed to its consumer.
In other words, decouple assembly resource acquisition from assembly resource usage. Between these two phases insert arbitrarily random amounts of wait time. I would also recommend logging the actual time it took remote users to get the assembly, and use that for future test points so that your UI Designers & QA Team have valuable information on how long users are waiting. This will allow you to cheaply mock-up the end-user's experience in your QA environment. Just make sure your log includes relevant details like the size of the assembly requested.
I posed a question on StackOverflow a few weeks ago about something related to this, and had to deal with the question you posed, so I am confident this is the right answer, born from experience, not cleverness.
You could simply add huge files (such as videos) to your module projects. It'll take longer to build such projects, but they'll also be bigger and therefore take longer to download locally. When you move to production, simply remove the huge files.