Tuesday, November 07, 2006

Financial education

This subject has been bugging me for some time. I'm a software developer, an engineer. In school and at university we didn't study this subject. Now I see that it's one of the most important ones and SHOULD be studied, since everything in our life is connected with finances.

A good start for understanding why financial education is so important is the books by Robert Kiyosaki. For Russian and Ukrainian readers there's an e-version of his books here

A rather good blog about finances is Get Rich Slowly

Friday, October 06, 2006

Interested in How These Invisible Little Things Look?

When I was a child I really enjoyed studying different little things like leaves, insects and spores with a microscope.

It's a pity there were no sites like this back then.

Windows Live Writer Team Appears to Be of Non-Microsoft Origin

Rob Mensching recently met the Windows Live Writer team, and it appears that this team was acquired by Microsoft.

Microsoft is trying its best to expand its Live initiative. Well, with such brilliant teams we can expect more interesting stuff to be released under the Live brand.

Tuesday, September 19, 2006

Does your network application support IPv6?

One of the ways to find out is to add an IPv6 address to your computer and make the application use it. If you observe no crashes and connectivity is fine, then you're okay and there is no need to read further :8-).

What can you do to be IPv6 "compatible"? First, start from here.

If your application is a managed one and you use sockets for network I/O, then the only thing you should remember is to check the IPAddress.AddressFamily property.

In code this can look like this (error checking is removed for simplicity's sake):

public void Connect(string host, int port)
{
    IPHostEntry ipHostEntry = Dns.GetHostEntry(host);

    // create the socket with the address family of the resolved address
    IPEndPoint ipEP = new IPEndPoint(ipHostEntry.AddressList[0], port);
    Socket socket = new Socket(ipEP.AddressFamily, SocketType.Stream, ProtocolType.Tcp);

    socket.Connect(ipEP);
}



Many developers create sockets assuming that there will always be IPv4. Generally this works, as IPv6 addresses are not common these days. But times are changing and we have to be prepared...
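On the listening side the same idea applies: instead of hard-coding AddressFamily.InterNetwork, take the address family from the address you bind to. A minimal sketch (the method name and backlog value are mine, just for illustration):

// uses System.Net and System.Net.Sockets
public static Socket Listen(IPAddress bindAddress, int port)
{
    // the socket's address family follows the address we bind to,
    // so the same code works for IPv4 and IPv6 addresses
    Socket listener = new Socket(bindAddress.AddressFamily,
        SocketType.Stream, ProtocolType.Tcp);
    listener.Bind(new IPEndPoint(bindAddress, port));
    listener.Listen(10);
    return listener;
}

Calling it with IPAddress.Any gives an IPv4 listener, while IPAddress.IPv6Any gives an IPv6 one.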

Wednesday, September 06, 2006

Finance news from Ukraine

Once again I had to use FeedBurner to convert a feed format. The trick is the same as described here. This time it was http://news.finance.ua/ua/rss. I'm using IE7 RC1 and it doesn't recognize the format of this feed.

So, here is the new feed, http://feeds.feedburner.com/Financeua, in RSS 2.0 format.

.NET: tricky enums with custom attributes

Sometimes, when we want to serialize a data type, we use attributes to give an additional description of the type itself and its fields.

Imagine that the type we want to serialize has an enum field (SampleEnum):


public class ExtendedInfoAttribute : Attribute
{
    string description;

    public string Description
    {
        get { return description; }
        set { description = value; }
    }
}



public enum SampleEnum
{
    [ExtendedInfo(Description = "First value")]
    EnumValueOne,
    [ExtendedInfo(Description = "Second value")]
    EnumValueTwo
}



The ExtendedInfo attribute provides additional info about the enum fields. When serializing, its value can be used to describe each field.


So, what's so special about getting these attribute values? Well, nothing special if you know where to look :8-)

// error checking is omitted for clarity
SampleEnum sEnum = SampleEnum.EnumValueTwo;

Type type = sEnum.GetType();
FieldInfo fieldInfo = type.GetField(Enum.GetName(type, sEnum));
ExtendedInfoAttribute[] attrs = (ExtendedInfoAttribute[])fieldInfo.GetCustomAttributes(
    typeof(ExtendedInfoAttribute), false);
Console.WriteLine(attrs[0].Description);
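If the same lookup is needed in several places, it can be wrapped in a small helper (a sketch of mine, not part of the original sample):

// wraps the reflection lookup so the description of any enum value
// can be read in one call; returns null when no attribute is present
public static string GetEnumDescription(Enum value)
{
    FieldInfo fieldInfo = value.GetType().GetField(value.ToString());
    ExtendedInfoAttribute[] attrs = (ExtendedInfoAttribute[])fieldInfo.GetCustomAttributes(
        typeof(ExtendedInfoAttribute), false);
    return attrs.Length > 0 ? attrs[0].Description : null;
}

For example, GetEnumDescription(SampleEnum.EnumValueOne) returns "First value".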

Sunday, September 03, 2006

Saturday, August 19, 2006

HTTP: Proxy Design Considerations.

First, let's briefly describe what an HTTP proxy does.

Basically, it receives HTTP requests and routes them to a remote web server or another proxy.

How does the proxy know where to send requests?
Well, in order to know that, the proxy has to parse incoming HTTP requests and extract the URI part of the request.

Note:
An HTTP request consists of a request line, headers with values, and possibly content. The request line together with the headers is terminated by a double CRLF sequence (CRLF stands for carriage return and line feed, the \r\n escape characters). Content may or may not follow (it depends on the request type: GET, POST, etc.).

So, the workflow will be: the proxy receives an HTTP request, parses/analyzes it and routes it to the appropriate server or another proxy.

How efficient is that?

Well, if we want a proxy with the ability to process HTTP content, then we'll design it so that the whole content of an HTTP request is received by the proxy and then parsed/analyzed (I will not cover that in this post). But if we want our proxy to merely route requests, the approach described above will be very inefficient: since the total size of an HTTP request can be quite large, receiving it completely can lead to high memory consumption.

The solution here can be quite simple. The HTTP header contains all the info the proxy needs to route the request. So the proxy can be designed to receive only the full HTTP header and parse/analyze it; if there is content pending, it is then streamed straight through to the destination pointed out by the request's header.
An indication that content is pending is, for example, an HTTP POST request with a Content-Length header bigger than 0.
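To make this concrete, here is a minimal sketch of the header-only parsing step (the method name and buffer handling are illustrative assumptions, not a complete proxy):

// scans the bytes received so far for the double CRLF that terminates the
// header block and reports how many content bytes, if any, are still pending
public static bool TryParseHeader(byte[] buffer, int received,
    out int headerLength, out int contentLength)
{
    headerLength = 0;
    contentLength = 0;

    string text = Encoding.ASCII.GetString(buffer, 0, received);
    int end = text.IndexOf("\r\n\r\n");
    if (end < 0)
        return false; // header not fully received yet, keep reading

    headerLength = end + 4;
    foreach (string line in text.Substring(0, end).Split(new string[] { "\r\n" }, StringSplitOptions.None))
    {
        if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
            contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());
    }
    return true;
}

As soon as TryParseHeader returns true, the proxy can pick the destination from the header and stream the remaining contentLength bytes through without buffering them.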

This approach is more efficient, since less memory is allocated to process a single HTTP request. It also speeds up traffic through the proxy.

Thursday, August 10, 2006

Windows Service Start Issues. .NET ServiceBase class

In the .NET world a Windows service is represented by the ServiceBase class
from the System.ServiceProcess namespace. The service developer derives her
own class from ServiceBase and overrides the OnStart and OnStop methods.

Then, to start the service, a ServiceBase.Run(...) call is needed, and here comes
an interesting part...

Usually process-wide initialization occurs in the OnStart override.
But what will happen if that initialization takes longer than 30 seconds?
(30 seconds is the default time that the Service Control Manager, SCM, will wait
for the service to start.)

There are two ways not to get into trouble here:
(1) ask for more time to finish initialization, or
(2) do the process-wide initialization on a separate thread.

Both ways have advantages and disadvantages.
In the first approach, ServiceBase.RequestAdditionalTime(...) is used to ask for more time. The benefit here is that the code that starts the service
knows for sure that the service is up and running.

The second approach gives the illusion that the service is up and running while
internal initialization may not be finished yet. This can cause strange behavior.

The first approach can be used when the service interacts with something (sends/receives data, etc.),
while the second one best suits scenarios where the service is a standalone application that communicates with nothing except the SCM :8-)
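A minimal sketch of an OnStart that uses the first approach, with the second one shown as a commented alternative (the initialization method and the extra 60 seconds are assumptions of mine):

public class SampleService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // option 1: ask the SCM for more time and initialize inline,
        // so the service is reported as running only when it really is
        RequestAdditionalTime(60000);
        Initialize();

        // option 2 (alternative): return from OnStart right away and run
        // Initialize() on a separate thread instead, e.g.
        //     new Thread(Initialize).Start();
        // the service then looks started while initialization may still be in progress
    }

    void Initialize()
    {
        // process-wide initialization goes here
    }

    protected override void OnStop()
    {
    }
}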

Monday, July 31, 2006

Back from the army

Finally I'm back from my military "vacation" :8-)

After 2 years of studying at the military faculty, every student has to serve in the army. Luckily, the term is 1 month.

So, I'm back, healthy, sunburnt and ready to work.

Friday, June 30, 2006

Converting RSS feed to different formats using FeedBurner services

So, why should anyone bother converting an RSS feed's format? The answer is that feeds can come in various formats: RSS 1.0, RSS 2.0, Atom 0.3, Atom 1.0, etc. An RSS reader has to know these formats and display every feed the same way regardless of its format.

At present I'm using SharpReader as my primary RSS reading tool, and there is one thing that annoys me: its memory consumption. I have something around 180 feeds in my feed list. After SharpReader refreshes them all, its memory consumption rises to more than 200 MB. That is not acceptable for me.

Recently I got a link that led me to the download page of Internet Explorer 7 Beta 3.
Though Beta 2 had been out there for quite a while already, I didn't want to bother myself with a “raw” product, especially if it is Internet Explorer :8-).

However, now it is Beta 3 and I thought, why not? Installation was okay. Then I exported my RSS feeds from SharpReader into an OPML file and imported it into IE.
The import was also fine; the feed hierarchy was preserved, which I really appreciate.

I'll omit describing some inconveniences with feed browsing I experienced in IE and focus on the issue that explains the title of this post :8-).

IE 7 Beta 3 supports only some feed formats!!! I was surprised. Namely, IE doesn't support the RDF format.

So, if RDF is not supported, I have to convert it to something that is supported, for example RSS 2.0. How to do that? FeedBurner comes to the rescue!

Here are the steps to obtain a converted feed:
1) go to http://www.feedburner.com
2) if you have an account there, sign in; if not, register and sign in
3) burn a feed: put in the URL of the feed that has the unsupported format
4) after the feed is burnt, go to the main page, select it and then select the “Optimize” tab
5) then select “Convert Format Burner”. This option lets you select the appropriate RSS format. Choose “Save” and that's it: you can use the new feed URL instead of the old one.

An example of such a converted feed:
RDF: http://www.ixbt.com/export/articles.rdf
RSS 2.0: http://feeds.feedburner.com/Ixbt

Wednesday, June 21, 2006

Microsoft Robotics Studio

Nowadays, robots are becoming more and more popular. Asian countries, namely South Korea and Japan, are the leaders in robotics.

Recently, a member of the Windows Mobile team blogged about the creation of a robot that runs on Windows Mobile (http://www.wimobot.com).

Microsoft recently announced Microsoft Robotics Studio.

Things get more and more interesting :8-)

Thursday, June 01, 2006

Deferred custom actions with WiX

Definition of Deferred Custom Actions


The installer does not execute a deferred execution custom action at the time the installation sequence is processed. Instead the installer writes the custom action into the installation script.


Custom actions that set properties, feature states, component states, or target directories, or that schedule system operations by inserting rows into sequence tables, can in many cases use immediate execution safely. However, custom actions that change the system directly, or call another system service, must be deferred to the time when the installation script is executed.

Purpose of deferred Custom Actions


The purpose of a deferred execution custom action is to delay the execution of a system change to the time when the installation script is executed. This differs from a regular custom action, or a standard action, which the installer executes immediately upon encountering it in a sequence table or in a form, e.g. via a Publish tag:


<Publish Event="DoAction" Value="CustomActionName">
    <![CDATA[1]]>
</Publish>

How to define deferred custom action in WiX


A deferred custom action is defined in the following way:


<CustomAction Id="MyAction" Return="check" Execute="deferred"
BinaryKey="CustomActionsLibrary" DllEntry="_MyAction@4"
HideTarget="yes"/>


Let's describe the sample above:
- Execute="deferred" means that the custom action with Id "MyAction" will execute in deferred mode (in-script).
- DllEntry="MyAction" is the exported name of the function to be called when the installer executes the generated installation script.
- HideTarget="yes" is there for security reasons: you may pass confidential info to your custom action, and this attribute tells the installer not to log the parameters passed to it.

How to Transfer Properties to Deferred Custom Action


Because the installation script can be executed outside of the installation session in which it was written, the session may no longer exist during execution of the installation script. This means that the original session handle and property data set during the installation sequence is not available to a deferred execution custom action. Deferred custom actions that call dynamic-link libraries (DLLs) pass a handle which can only be used to obtain a very limited amount of information.


Properties that can be retrieved during in-script execution are:

- CustomActionData - value at time custom action is processed in sequence table. The “CustomActionData” property is only available to deferred execution custom actions. Immediate custom actions do not have access to this property.
- ProductCode - unique code for the product, a GUID string.
- UserSID - set by the installer to the user's security identifier (SID).


If other property data is required by the deferred execution custom action, then their values must be stored in the installation script. This can be done by using a second custom action.
In order to write the value of a property into the installation script for use during a deferred execution custom action we have to do the following:


- Insert a small custom action into the installation sequence that sets a property with the same name as the deferred execution custom action to the value of the property of interest. For example, if the Id of the deferred execution custom action is "MyAction", set a property named "MyAction" to the property X you need to retrieve. You must set the "MyAction" property in the installation sequence before the "MyAction" custom action runs.


Although any type of custom action can set the context data, the simplest method is to use a property assignment custom action.


At the time when the installation sequence is processed, the installer will write the value of property X into the execution script as the value of the property CustomActionData.


Let's illustrate the above with a WiX sample.


First we define the custom action that will assign the CustomActionData property.


<Property Id="SOME_PUBLIC_PROPERTY">
    <![CDATA[Hello, from deferred CA]]>
</Property>


<CustomAction Id="MyAction.SetProperty" Return="check"
              Property="MyAction" Value="[SOME_PUBLIC_PROPERTY]">
</CustomAction>


Then we put the above-defined custom action into the execution sequence along with the "MyAction" deferred custom action:


<InstallExecuteSequence>
    <Custom Action="MyAction.SetProperty" After="ValidateProductID"/>
    <Custom Action="MyAction" After="MyAction.SetProperty"/>
</InstallExecuteSequence>


Our custom action will reside in a DLL. Below is a sample where we retrieve "SOME_PUBLIC_PROPERTY" during deferred (in-script) installer execution.


<Binary Id='CustomActionsLibrary'
        SourceFile='Binary\CustomActionsLibrary.dll' />


#include <windows.h>
#include <msi.h>
#include <msiquery.h>
#include <tchar.h>


#pragma comment(linker, "/EXPORT:MyAction=_MyAction@4")


extern "C" UINT __stdcall MyAction (MSIHANDLE hInstall)

{

TCHAR szActionData[MAX_PATH] = {0};

MsiGetProperty (hInstall, "CustomActionData",
szActionData,sizeof(szActionData));


::MessageBox(NULL, szActionData, _T(“Deferred Custom Action”),
MB_OK | MB_ICONINFORMATION);


return ERROR_SUCCESS;


}


If there is a need to transfer multiple properties to a deferred custom action, the "CustomActionData" property may contain name/value pairs, e.g. PropertyName=PropertyValue, with some separator symbol (;).
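For illustration, a small managed sketch of unpacking such a value (the key names and the ';' separator are assumptions; in the C++ action above the same splitting would be done on the string returned by MsiGetProperty):

// splits a packed CustomActionData string such as
// "TARGETDIR=C:\App;USERNAME=John" into name/value pairs
public static Dictionary<string, string> ParseCustomActionData(string data)
{
    Dictionary<string, string> values = new Dictionary<string, string>();
    foreach (string pair in data.Split(';'))
    {
        int eq = pair.IndexOf('=');
        if (eq > 0)
            values[pair.Substring(0, eq)] = pair.Substring(eq + 1);
    }
    return values;
}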

Friday, May 26, 2006

WiX: Changing entry name in ProgramMenuFolder in the runtime

Recently I was developing an installation package using the WiX toolset and encountered an interesting problem.


After installing the application, the installation package also installs shortcuts into the "Start" programs menu (in WiX the constant for it is "ProgramMenuFolder").

The task was to give the Programs sub-entry a specific, dynamic name that is generated at installation time.

So, how to solve this?

It appears that if you write

<Directory Id="ProgramMenuFolder" Name="PMenu" LongName="Programs">
<Directory Id="ProgramMenuDir" Name='Comp' LongName="Full Company Name">
</Directory>
</Directory>

you expose "ProgramMenuDir" as a property which you can change.

Okay, the question arises: how and when should this property be changed so that the installer uses the new value when creating the Programs menu sub-entry?

To set the property value we can use a custom action:

<CustomAction Id="DIRCA_SETPROGRAMFOLDER" Return="check" Property="ProgramMenuDir"

Value="[ProgramMenuFolder]Company Title - [PUBLIC_PROPERTY]"></CustomAction>

Values in square brackets correspond to names of properties defined during installation. The square bracket notation [PROPERTY1] means "take the value of the property named PROPERTY1".

Good, we have a custom action that changes the property. When should it execute? That is another question.

Properties that are used as directory names are finally resolved during the CostFinalize action, so to change their values we have to run our custom action before CostFinalize. Piece of cake!

<InstallUISequence>
    <Custom Action="DIRCA_SETPROGRAMFOLDER" Before="CostFinalize"></Custom>
</InstallUISequence>

<InstallExecuteSequence>
    <Custom Action="DIRCA_SETPROGRAMFOLDER" Before="CostFinalize"></Custom>
</InstallExecuteSequence>

Now, after the installer has successfully finished, we have a nice custom entry in ProgramMenuFolder.

Saturday, May 13, 2006

Blog template changes

I've made some changes to the template of this blog. On the right side there is now an RSS link with the standard feed icon, like this

Friday, May 05, 2006

public vs private properties in MSI

Intro

Sooner or later, after the development stage, every developer faces the deployment stage. This stage turns into developing yet another application, namely the installer.

There are a lot of installer tools out there; I will name just a few: NSIS (Nullsoft Scriptable Install System), InstallShield, WISE, WiX and a lot of others. The latter ones use Microsoft Installer technology: all of them, except the Nullsoft installer, produce .msi and other MS Installer files.

Properties in MSI

Since an MSI file is a collection of tables, properties are placed in a special table called, as you will probably guess, "Property" :8-). So, what's so special about these properties?

Custom Actions and Properties

Every installer developer should be aware that there are 2 types of properties in MSI. Values from the Property table are used by the installer as global variables during the installation process.

Information about the types of properties is of special value for "custom action" (CA) developers. CAs are separate programs (.exe) or modules (.dll) that are called during the installation process to perform custom actions.

Now, let’s get back to properties...

Docs say:
- Private properties: The installer uses private properties internally and
their values must be authored into the installation database or set to values determined by the operating environment.
- Public properties: Public properties can be authored into the database and changed by a user or system administrator on the command line, by applying a transform, or by interacting with an authored user interface.

You will ask: how does the installer determine which properties are private and which are public?
It is very simple: public property names must contain no lowercase letters (e.g. INSTALLDIR is public, while ProgramMenuDir is private), and that's it.

So, if your custom action stopped working and the problem is that it cannot read a property from the MSI, the first thing you should check is whether this property is public.

Sunday, March 19, 2006

Issues when using MarshalByRefObject instances from several application domains

First, let's consider why we need to derive our classes from MarshalByRefObject.
The reason is very simple: we want to use them across separate application domains.

Let's suppose we have two appdomains: appdomain1 and appdomain2.
(In this post I will not cover the details of application domain creation.)

public class MainAppD : MarshalByRefObject
{
    public SlaveAppD slaveAppDomain;

    public void DoInPrimaryAppDomain()
    {
    }
}

//this class will be executing in the second app domain
public class SlaveAppD : MarshalByRefObject
{
    public MainAppD mainAppDomain;

    public void DoInSlaveAppDomain()
    {
    }
}

What's wrong with these classes?

Recently I had a similar scenario and wondered why my objects got disconnected from the other appdomain.
When an object is disconnected and you try to call its methods, you get a RemotingException...

Let's consider the lifetime of the above-mentioned objects. The class members that are cross-domain references,
namely MainAppD.slaveAppDomain and SlaveAppD.mainAppDomain,
will become __TransparentProxy references. All calls will go through these transparent proxies.

The notions of proxies and application domains are directly connected with the notion of lifetime.

If we consider the classes above, neither of them cares about its lifetime.
MarshalByRefObject has an InitializeLifetimeService() method that returns a lifetime lease. The lease specifies
how long the object can stay "alive" (basically, this means the proxy connection is alive).
The default lease time is 5 minutes. To increase the lifetime
one has to override InitializeLifetimeService. Let's set the lease time to 15 minutes.

public override Object InitializeLifetimeService()
{
    ILease lease = (ILease)base.InitializeLifetimeService();
    if (lease.CurrentState == LeaseState.Initial)
    {
        lease.InitialLeaseTime = TimeSpan.FromMinutes(15);
        lease.SponsorshipTimeout = TimeSpan.FromMinutes(2);
        lease.RenewOnCallTime = TimeSpan.FromSeconds(2);
    }
    return lease;
}

If you want an infinite proxy connection lifetime, you simply return null.

//Infinite lifetime lease
public override Object InitializeLifetimeService()
{
    return null;
}

Saturday, March 18, 2006

Blog connectivity

In order to always stay connected to this blog (in case I change hosting), please use this link as the syndication link.

Thursday, March 02, 2006

Beware of asynchronous methods calls that can complete synchronously. Part 1

In .NET every method can be executed asynchronously, which means it will be executed on a separate thread, typically a ThreadPool thread. Here is sample code…

class Program
{
    public delegate void LongMethodDelegate();

    static void Main(string[] args)
    {
        Console.WriteLine("Main thread: {0}",
            Thread.CurrentThread.ManagedThreadId);

        LongMethodDelegate @delegate =
            new LongMethodDelegate(LongRunningMethod);

        IAsyncResult ar = @delegate.BeginInvoke(
            new AsyncCallback(OnComplete), null);
        ar = @delegate.BeginInvoke(new AsyncCallback(OnComplete), null);
        //@delegate.EndInvoke(ar);

        Console.ReadLine();
    }

    static void LongRunningMethod()
    {
        Thread.Sleep(10000); //long running task
        Console.WriteLine("--------- LongRunningMethod");
        Console.WriteLine("Executing on thread: {0}",
            Thread.CurrentThread.ManagedThreadId);
    }

    static public void OnComplete(IAsyncResult ar)
    {
        Thread thread = Thread.CurrentThread;

        Console.WriteLine("--------- OnComplete");
        Console.WriteLine("Completed on thread: {0}\n" +
            "Synchronously: {1}\n" +
            "ThreadPool thread: {2}\n" +
            "IsBackground thread: {3}",
            thread.ManagedThreadId,
            ar.CompletedSynchronously.ToString(),
            thread.IsThreadPoolThread.ToString(),
            thread.IsBackground.ToString());
    }
}

Everything is pretty simple here: LongRunningMethod() is executed on a separate thread, and OnComplete is called when LongRunningMethod() has completed. The thread that completes the work is a worker thread.

With this approach, the method that is being called asynchronously will be executed on a separate thread… However, as you may have noticed, IAsyncResult has a CompletedSynchronously property. What is it for? That is a good question, and I hope it will be answered soon :8-)

Monday, February 27, 2006

Ukrainian version of blog started

Since I live in Ukraine (a not-so-small republic in central-eastern Europe), I decided to start a mirror blog in Ukrainian.

You can find it here

Sunday, January 22, 2006

Proxy Server behavior with different HTTP protocol versions

In one of my previous posts I mentioned the HTTP tunnel application that I'm working on. Basically it can be considered an HTTP proxy. Let's see now what difficulties can be encountered when implementing a proxy server.

At present there are 2 official versions of the HTTP protocol: 1.0 and 1.1. From a proxy's point of view, the most significant difference between them is that by default 1.0 doesn't support persistent connections with the web server, while 1.1 supports them by default. Also, the Connection header can be specified to control the web server's behavior; that is, Connection: close signals the web server to close the connection once the response has been sent.

In this situation the proxy server can operate in several modes:
- operate as if there were no proxy server at all: if the client uses HTTP 1.0 then the proxy also uses HTTP 1.0, and the same for HTTP 1.1
- on the client (local) side use the protocol specified by the client, and on the web server (remote) side maintain the other protocol. For example, the proxy works with the client under HTTP 1.0 and with the server under HTTP 1.1. The proxy server does not ignore the Connection header
- the same as above, except the proxy server ignores the Connection header, i.e. it tries to maintain a persistent connection by all means.

The last scenario can be used when the remote endpoint isn't a web server but another proxy.
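A sketch of the decision the proxy has to make for the client-side connection (the method and parameter names are mine; chunked transfer and other details are ignored):

// decides whether the client-side connection may be kept persistent,
// based on the client's HTTP version and its Connection header
public static bool KeepClientConnectionAlive(string httpVersion, string connectionHeader)
{
    if (connectionHeader != null &&
        connectionHeader.Equals("close", StringComparison.OrdinalIgnoreCase))
        return false;

    // HTTP/1.1 defaults to persistent connections; for HTTP/1.0 clients the RFC
    // advises a proxy not to maintain a persistent connection (Keep-Alive caveats)
    return string.Equals(httpVersion, "HTTP/1.1", StringComparison.OrdinalIgnoreCase);
}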

In RFC 2616 [HTTP/1.1] proxy behavior is described like this:
“It is especially important that proxies correctly implement the
properties of the Connection header field as specified in section
14.10.

The proxy server MUST signal persistent connections separately with
its clients and the origin servers (or other proxy servers) that it
connects to. Each persistent connection applies to only one transport
link.

A proxy server MUST NOT establish a HTTP/1.1 persistent connection
with an HTTP/1.0 client (but see RFC 2068 for information and
discussion of the problems with the Keep-Alive header implemented by
many HTTP/1.0 clients).”

From this part of the RFC we can see that the last operation mode can be considered a "hard optimization".

Another tricky moment while implementing a proxy server for HTTP 1.1 is the Content-Length header. Generally, setting this header simplifies content retrieval from the server. However, when the content size is big, modern web servers can omit the Content-Length header in order to boost performance and reduce the amount of resources allocated on the server. In practice we get a situation where the client is receiving data without any clue how much data is still to come or where the data stream will end.

The proxy server has to know about this issue and handle it correctly. Here we also have 2 options: the proxy server can receive the whole body from the server, set the Content-Length header and transmit the whole body to the client; the second option is to redirect the data stream to the client as if there were no proxy at all. One of the pitfalls is that if the content is too large, the proxy server can consume a great amount of system resources (option 1), while in the second mode the proxy doesn't know when the data stream will finish.
At first I'll implement the first mode (that is, caching the content on the proxy and then sending it to the client), then probably I'll experiment with the second option.
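A rough sketch of the first mode, assuming the server simply closes the stream when the response ends and that the passed-in header block does not yet contain Content-Length (chunked transfer and error handling are left out):

// mode 1: buffer the whole server response, then forward it to the client
// with an explicit Content-Length header; memory-hungry for large responses
public static void ForwardBuffered(Stream fromServer, Stream toClient, string responseHeader)
{
    MemoryStream body = new MemoryStream();
    byte[] buffer = new byte[8192];
    int read;
    while ((read = fromServer.Read(buffer, 0, buffer.Length)) > 0)
        body.Write(buffer, 0, read);

    // now the size is known, so Content-Length can be set before sending
    string header = responseHeader.TrimEnd() + "\r\nContent-Length: " + body.Length + "\r\n\r\n";
    byte[] headerBytes = Encoding.ASCII.GetBytes(header);

    toClient.Write(headerBytes, 0, headerBytes.Length);
    body.WriteTo(toClient);
}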