ppolv’s blog

February 29, 2008

esolr, an erlang text search client library for Apache Solr

Filed under: erlang — ppolv @ 4:42 am

From the Apache Solr website:

Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface. It runs in a Java servlet container such as Tomcat.

Nice, a full text search engine easily accessible from anywhere. Just HTTP, no special binding required.

I’ve just hacked esolr, a simple, almost untested and featureless erlang client for Solr ;-). Well, there weren’t many operations to implement, really. The basic and useful ones are:

  • Add/Update documents: esolr:add/1
  • Delete documents: esolr:delete/1
  • Search: esolr:search/2

Also, there are functions to perform commits to the index (to make all changes made since the last commit available for searching) and to optimize the index (a time-consuming operation, see the Solr documentation). Besides issuing commit and optimize operations explicitly, the library can also perform those operations periodically at user-defined intervals. In the case of commits, they can also be configured to take place automatically after each add or delete operation (mainly useful for development, not for production code).
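For example, a minimal sketch of setting up that periodic behaviour (the set_auto_commit/1 and set_auto_optimize/1 calls and their always / {time, Milliseconds} arguments are assumptions here; check the API docs for the exact names and options):

esolr:set_auto_commit(always).            %% commit after every add/delete
esolr:set_auto_commit({time, 10000}).     %% or instead: commit every 10 seconds
esolr:set_auto_optimize({time, 3600000}). %% optimize the index once per hour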

Quick start:

  1. Install Solr 1.2
  2. Run it with the sample configuration provided (/example$ java -jar start.jar)
  3. Make sure it is running correctly by opening a browser at http://localhost:8983/solr/admin/
  4. Get esolr from the trapexit forum
  5. Look at the html API documentation
  6. Compile the sources (rfc4627.erl, from http://www.lshift.net/blog/2007/02/17/json-and-json-rpc-for-erlang, is included)
  7. Start the esolr library esolr:start_link()
  8. Play around

To compile, open an erlang console in the directory where the .erl files reside, and type:

28> c(rfc4627).
{ok,rfc4627}
29> c(esolr).
{ok,esolr}

then start the esolr process, using default configuration:

30> esolr:start_link().
{ok,<0.33.0>}

Add some documents. Here we add two documents, one in each call to esolr:add/1. The id and name fields are defined in the sample Solr schema; id is… you know, the ID of the document.

31> esolr:add([{doc,[{id,"a"},{name,<<"Look me mom!, I'm searching now">>}]}]).
ok
32> esolr:add([{doc,[{id,"b"},{name,<<"Yes, searching from the erlang console">>}]}]).
ok

Commit the changes.

33> esolr:commit().
ok

Search. We search for the word “search”, and specify that we want all the normal fields plus the document score for the query, that we want the results in ascending order by id, and that we want the matches highlighted for us.

34> esolr:search("search",[{fields,"*,score"},{sort,[{id,asc}]},{highlight,"name"}]).
{ok,[{"numFound",2},{"start",0},{"maxScore",0.880075}],
    [{doc,[{"id",<<"a">>},
           {"sku",<<"a">>},
           {"name",<<"Look me mom!, I'm searching now">>},
           {"popularity",0},
           {"timestamp",<<"2008-02-28T23:42:15.642Z">>},
           {"score",0.628625}]},
     {doc,[{"id",<<"b">>},
           {"sku",<<"b">>},
           {"name",<<"Yes, searching from the erlang console">>},
           {"popularity",0},
           {"timestamp",<<"2008-02-28T23:43:26.997Z">>},
           {"score",0.880075}]}],
    [{"highlighting",
      {obj,[{"a",
             {obj,[{"name",
                    [<<"Look me mom!, I'm <em>searching</em> now">>]}]}},
            {"b",
             {obj,[{"name",
                    [<<"Yes, <em>searching</em> from the erlang "...>>]}]}}]}}]}

Read the API docs to find all the functions/options implemented so far.

Have fun!


February 25, 2008

Parsing CSV in erlang

Filed under: erlang — ppolv @ 9:23 pm

So I need to parse a CSV file in erlang.

Although files in CSV have a very simple structure, simply calling string:tokens(Line,",") for each line in the file won’t do the trick, as there could be quoted fields that span more than one line and contain commas or escaped quotes.

A detailed discussion of string parsing in erlang can be found in the excellent article Parsing text and binary files with Erlang by Joel Reymont. And the very first example is parsing a CSV file! But being the first example, it was written with simplicity rather than completeness in mind, so it doesn’t take quoted/multi-line fields into account.

Now, we will write a simple parser for RFC-4180 documents (which is way cooler than parsing plain old CSV files 😉). As the format is really simple, we won’t use yecc nor leex, but will parse the input file by hand using binaries, lists and lots of pattern matching.

Our goals are

  • Recognize fields delimited by commas, records delimited by line breaks
  • Recognize quoted fields
  • Be able to parse quotes, commas and line breaks inside quoted fields
  • Ensure that all records have the same number of fields
  • Provide a fold-like callback interface, in addition to a return-all-records-in-file function

What the parser won’t do:

  • Unicode. We will treat the file as binary and consider each character as ASCII, 1 byte wide. To parse unicode files, you can use xmerl_ucs:from_utf8/1, and then process the resulting list instead of the raw binary

A quick look suggests that the parser will pass through the following states:
(diagram: CSV parsing states)

  • Field start: at the beginning of each field. Whitespace is considered part of an unquoted field, but any whitespace before a quoted field is discarded
  • Normal: inside an unquoted field
  • Quoted: inside a quoted field
  • Post quoted: after a quoted field. Whitespace may appear between a quoted field and the next field/record, and should be discarded

Parsing state

While parsing, we will use the following record to keep track of the current state

-record(ecsv,{
   state = field_start,  %%field_start|normal|quoted|post_quoted
   cols = undefined, %%how many fields per record
   current_field = [],
   current_record = [],
   fold_state,
   fold_fun  %%user supplied fold function
   }).

API functions

parse_file(FileName,InitialState,Fun) ->
   {ok, Binary} = file:read_file(FileName),
    parse(Binary,InitialState,Fun).
    
parse_file(FileName)  ->
   {ok, Binary} = file:read_file(FileName),
    parse(Binary).

parse(X) ->
   R = parse(X,[],fun(Fold,Record) -> [Record|Fold] end),
   lists:reverse(R).
		
parse(X,InitialState,Fun) ->
   do_parse(X,#ecsv{fold_state=InitialState,fold_fun = Fun}).

The three-argument functions provide the fold-like interface, while the single-argument one returns a list with all the records in the file.
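For example, assuming the code is saved in a module called ecsv (the module name and the /tmp/data.csv path below are just for illustration), collecting all records from a binary and folding over a file look like this:

%% all the records at once; quoted fields with embedded commas are handled
[{"name","age"},{"Smith, John","32"}] =
    ecsv:parse(<<"name,age\r\n\"Smith, John\",32\r\n">>),

%% fold over a file without building the whole list, e.g. just count records
RecordCount = ecsv:parse_file("/tmp/data.csv", 0,
                              fun(Count, _Record) -> Count + 1 end).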

Parsing

Now the fun part!
The transitions (State x Input -> NewState) are almost 1:1 derived from the diagram, with minor changes (like the handling of field and record delimiters, common to both the normal and post_quoted states).
Inside a quoted field, a double quote must be escaped by preceding it with another double quote. It’s really easy to distinguish this case by matching against

<<$",$",_/binary>>

a sort of “lookahead”, in yacc parlance.

 
%% --------- Field_start state ---------------------
%%whitespace, loop in field_start state
do_parse(<<32,Rest/binary>>,S = #ecsv{state=field_start,current_field=Field})->		
	do_parse(Rest,S#ecsv{current_field=[32|Field]});

%%it's a quoted field, discard previous whitespace
do_parse(<<$",Rest/binary>>,S = #ecsv{state=field_start})->		
	do_parse(Rest,S#ecsv{state=quoted,current_field=[]});

%%anything else is an unquoted field
do_parse(Bin,S = #ecsv{state=field_start})->
	do_parse(Bin,S#ecsv{state=normal});	
		
		
%% --------- Quoted state ---------------------	
%%Escaped quote inside a quoted field	
do_parse(<<$",$",Rest/binary>>,S = #ecsv{state=quoted,current_field=Field})->
	do_parse(Rest,S#ecsv{current_field=[$"|Field]});		
	
%%End of quoted field
do_parse(<<$",Rest/binary>>,S = #ecsv{state=quoted})->
	do_parse(Rest,S#ecsv{state=post_quoted});
	
%%Anything else inside a quoted field
do_parse(<<X,Rest/binary>>,S = #ecsv{state=quoted,current_field=Field})->
	do_parse(Rest,S#ecsv{current_field=[X|Field]});
	
do_parse(<<>>, #ecsv{state=quoted})->	
	throw({ecsv_exception,unclosed_quote});
	
	
%% --------- Post_quoted state ---------------------		
%%consume whitespaces after a quoted field	
do_parse(<<32,Rest/binary>>,S = #ecsv{state=post_quoted})->	
	do_parse(Rest,S);


%%---------Comma and New line handling. ------------------
%%---------Common code for post_quoted and normal state---

%%EOF in a new line, return the records
do_parse(<<>>, #ecsv{current_record=[],fold_state=State})->	
	State;
%%EOF in the last line, add the last record and continue
do_parse(<<>>,S)->	
	do_parse(<<>>,new_record(S));

%% skip carriage return (windows files use CRLF)
do_parse(<<$\r,Rest/binary>>,S = #ecsv{})->
	do_parse(Rest,S);		
		
%% new record
do_parse(<<$\n,Rest/binary>>,S = #ecsv{}) ->
	do_parse(Rest,new_record(S));
	
do_parse(<<$, ,Rest/binary>>,S = #ecsv{current_field=Field,current_record=Record})->	
	do_parse(Rest,S#ecsv{state=field_start,
					  current_field=[],
					  current_record=[lists:reverse(Field)|Record]});


%%A double quote anywhere other than the cases already handled is an error
do_parse(<<$",_Rest/binary>>, #ecsv{})->	
	throw({ecsv_exception,bad_record});
	
%%Anything other than whitespace or line ends in post_quoted state is an error
do_parse(<<_X,_Rest/binary>>, #ecsv{state=post_quoted})->
 	throw({ecsv_exception,bad_record});

%%Accumulate Field value
do_parse(<<X,Rest/binary>>,S = #ecsv{state=normal,current_field=Field})->
	do_parse(Rest,S#ecsv{current_field=[X|Field]}).

Record assembly and callback

Convert each record to a tuple, and check that it has the same number of fields as the previous records. Then invoke the callback function with the new record and the previous fold state.

%%check the record size against the previous one, and update the state.
new_record(S=#ecsv{cols=Cols,current_field=Field,current_record=Record,fold_state=State,fold_fun=Fun}) ->
	NewRecord = list_to_tuple(lists:reverse([lists:reverse(Field)|Record])),
	if
		(tuple_size(NewRecord) =:= Cols) or (Cols =:= undefined) ->
			NewState = Fun(State,NewRecord),
			S#ecsv{state=field_start,cols=tuple_size(NewRecord),
					current_record=[],current_field=[],fold_state=NewState};
		
		(tuple_size(NewRecord) =/= Cols) ->
			throw({ecsv_exception,bad_record_size})
	end.

Final notes

We used a single function, do_parse/2, with many clauses to do the parsing. In a more complex scenario, you would probably use different functions for different sections of the grammar you are parsing. You could also first tokenize the input and then parse the resulting token stream; this can make your work simpler even if you aren’t using a parser generator like yecc (this is the approach I’m using to parse ldap filters).

February 15, 2008

erlang, ssl and asn1

Filed under: erlang — ppolv @ 3:44 pm

I’ve been playing with erlang and asn1, implementing an ldap plugin for the tsung load testing tool. Erlang comes with nice support for working with ber-encoded asn1 data; particularly handy is its ability to recognize packet boundaries and deliver network data one asn1 packet at a time rather than as a raw byte stream.

One of the extended operations defined in the ldap protocol, the startTLS command, allows the use of an unencrypted, plain tcp socket to connect to the server and later “upgrade” the same connection to use ssl. To implement this in erlang, the way to go is the new ssl module, since it is capable of establishing an ssl session over an already connected gen_tcp socket, something previous OTP versions couldn’t do. Sadly for me, this new ssl module doesn’t seem to be able to recognize asn1 packets yet. Luckily, the buffering code required is very simple to implement in erlang.
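Just to make the “upgrade” part concrete: the socket side boils down to connecting with gen_tcp, sending the startTLS extended operation over the plain socket, and then handing that same socket over to the ssl module. A rough sketch (host, port and ssl options are placeholders, and the encoding of the startTLS request itself is omitted):

{ok, Sock} = gen_tcp:connect("ldap.example.com", 389,
                             [binary, {active, false}]),
%% ... send the startTLS extended request over Sock with gen_tcp:send/2
%% and read the server's response here (omitted) ...
{ok, SslSock} = ssl:connect(Sock, [], 5000),
%% from here on, talk to the server with ssl:send/2 and ssl:recv/2,3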

Here is the buffering code I’m using:

%%The buffer consists of the data received and the length of the current packet,
%%undefined if the length is still unknown.
-record(asn1_packet_state,    {
    length = undefined,
    buffer = <<>>
}).

%%MAX_HEADER is an assumed upper bound on the size of a BER tag+length header;
%%a complete header fits in far fewer bytes than this.
-define(MAX_HEADER, 16).

%%The push function simply appends the data to the end of the buffer.
push(<<>>,S) ->
    S;
push(Data,S =#asn1_packet_state{buffer = B}) ->
    S#asn1_packet_state{buffer = <<B/binary,Data/binary>>}.

%% Try to extract a packet from the buffer, if the length is unknown, calculate it first.
get_packet(S = #asn1_packet_state{length=undefined,buffer= <<>>}) ->
    {none,S};
get_packet(S = #asn1_packet_state{length=undefined,buffer=Buffer}) ->
    case packet_length(Buffer) of
        {ok,Length} -> extract_packet(S#asn1_packet_state{length=Length});
         not_enough_data -> {none,S}
    end;
get_packet(S) -> extract_packet(S).

%% Extract the packet if there is enough data available.
extract_packet(#asn1_packet_state{length=N,buffer=Buffer}) when (size(Buffer) >= N) ->
    <<Packet:N/binary,Rest/binary>> = Buffer,
    {packet,Packet,#asn1_packet_state{length=undefined,buffer=Rest}};

extract_packet(S) when is_record(S,asn1_packet_state) -> {none,S}.

%%Extract the packet size from the packet header.
packet_length(Buffer) ->
    try asn1rt_ber_bin:decode_tag_and_length(Buffer) of
       {_Tag, Len,_Rest,RemovedBytes} ->  {ok,Len+RemovedBytes}
    catch
        _Type:_Error ->
            if
                (size(Buffer) > ?MAX_HEADER) -> throw({invalid_packet,Buffer});
                true -> not_enough_data  %%incomplete header
            end
    end.

So whenever you get data from the network, you push/2 them into the buffer. Then you can get_packet/1 from the buffer, keeping in mind that there could be no complete packet yet, or more than one packet could be present in the buffer; get_packet/1 will return either {none,Buffer} or {packet,Packet,Buffer}.
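Put together, the receiving side ends up looking something like this (just a sketch: the #state{} record, the active ssl socket and handle_packet/1 are placeholders for whatever your application uses):

%% drain every complete packet currently sitting in the buffer
process_buffer(Buffer) ->
    case get_packet(Buffer) of
        {none, Buffer1} ->
            Buffer1;
        {packet, Packet, Buffer1} ->
            handle_packet(Packet),      %% application-specific handling
            process_buffer(Buffer1)
    end.

%% somewhere in a gen_server owning an active ssl socket:
handle_info({ssl, _Sock, Data}, State = #state{buffer = Buffer}) ->
    NewBuffer = process_buffer(push(Data, Buffer)),
    {noreply, State#state{buffer = NewBuffer}}.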

February 4, 2008

It isn’t performance, it’s scalability

Filed under: random — ppolv @ 8:09 pm

Recently I’ve been reading about cache technologies, memcached in particular. The architecture behind memcached is what caught my attention: it’s so clear and simple. Beautiful.

Then I reached this performance comparison between memcached and ehcache (almost a year old, but I just read it, sorry). At first I was somewhat surprised and, to be honest, disappointed. I thought memcached would compare better than that… but… wait…
The author of the article has experience in the world of java caching (actually, he is the maintainer of ehcache), but I think the benchmark isn’t really fair. While the memcached setup is already a distributed one, the ehcache setup isn’t. While the memcached setup is ready to hold TERABYTES of data, the ehcache setup isn’t.

I’m not saying that ehcache can’t be distributed; what I’m saying is that the costs associated with distributing ehcache aren’t represented in the benchmark.

Anyway, the point isn’t performance, it’s scalability that matters! And that’s why you should always read benchmarks carefully.
If my web application isn’t a big, heavily accessed one (like the intranet applications I’m used to working with), then an entirely in-process caching strategy is fine, and easy to implement and understand too.
But for high-load scenarios, with many web servers involved, benchmarks like this one don’t apply.

Update: Unfair Benchmarks of Ehcache vs Memcached, another response to the original article.
