AltME: Announce

Messages

Maxim
(here in altme, but in the ann-Reply group ;)

Kaj
I implemented SINGLE? and OFFSET? for Red in common.red:
http://red.esperconsultancy.nl/Red-common/dir?ci=tip
I've also started collecting commonly used PARSE rules in there, so far for string parsing:
whitespace
digit
non-zero ("1" - "9")
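For illustration, these might look something like the following (a hypothetical sketch; the actual common.red may differ):
single?: func [series [series!]] [1 = length? series]
offset?: func [series1 [series!] series2 [series!]] [
    (index? series2) - (index? series1)
]
whitespace: charset " ^-^/^M"    ; space, tab, newline, carriage return
digit: charset "0123456789"
non-zero: charset "123456789"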
AdrianS
Kaj, for offset? should the function not test that the series passed in refer to the same underlying series (but at different positions)? I see that Rebol doesn't do this check either, but wouldn't it make sense to do so? What is the point of comparing positions in different underlying series?
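Such a check might look like this (a hypothetical sketch, not the actual implementation):
offset?: func [series1 [series!] series2 [series!]] [
    unless same? head series1 head series2 [
        do make error! "offset? expects two positions in the same series"
    ]
    (index? series2) - (index? series1)
]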

Gregg
My implementation of OFFSET? in Red is exactly the same as Kaj's. I didn't think about adding that check either, but I think I have used it on different series at least once in the past, tracking progress in two different queues.
Kaj
By request, I added a /deep refinement to my JSON emitter for emitting nested blocks as objects. The /map refinement now only applies to even-sized blocks, instead of considering odd-sized blocks an error:
http://red.esperconsultancy.nl/Red-JSON/dir?ci=tip
print to-JSON/map/deep [a 9 [b 9  c 42]]
["a", 9, {"b": 9, "c": 42}]
The JSON loader now supports string escaping, except Unicode escapes, which are implemented but are waiting for Red's LOAD to support char! syntax with parens: #"^(0000)"
probe load-JSON {"Escape: \"\\\/\n\r\t\b\f"}
{Escape: "\/^/^M^-^H^L}
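The escape decoding in such a loader can be written as a PARSE rule along these lines (a hypothetical sketch, not the actual Red-JSON code):
unescape: func [text [string!] /local result char] [
    result: make string! length? text
    parse text [
        any [
            #"\" [
                #"n" (append result #"^/")
                | #"r" (append result #"^M")
                | #"t" (append result #"^-")
                | #"b" (append result #"^H")
                | #"f" (append result #"^L")
                | set char skip (append result char)    ; \" \\ \/ pass through literally
            ]
            | set char skip (append result char)
        ]
    ]
    result
]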

amacleod
Very exciting stuff, Kaj!
Kaj
I updated Red on Try REBOL, so it has the latest PARSE fixes:
http://tryrebol.esperconsultancy.nl
It also includes my Tagged NetStrings converter: the to-TNetString and load-TNetString functions:
http://red.esperconsultancy.nl/Red-TNetStrings/dir?ci=tip&name=examples
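For those new to the format: a tagged netstring prefixes each value with its byte length and suffixes it with a type tag ("," for strings, "#" for integers, and so on, per tnetstrings.org). A session might look roughly like this (hypothetical output; the converter's exact behavior may differ):
print to-TNetString "hello"
5:hello,
probe load-TNetString "2:42#"
42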

Kaj
I added pretty printing to the JSON emitter:
http://red.esperconsultancy.nl/Red-JSON/dir?ci=tip
print to-JSON/map [a 9  b [9]  c 42]
{
    "a": 9,
    "b":
    [
        9
    ],
    "c": 42
}
There's now a /flat refinement that omits all spacing:
print to-JSON/flat/map [a 9  b [9]  c 42]
{"a":9,"b":[9],"c":42}
amacleod
love it!

Kaj
I implemented string escaping in the JSON emitter:
http://red.esperconsultancy.nl/Red-JSON/dir?ci=tip
print to-JSON {Controls: "\/^(line)^M^(tab)^(back)^(page)^(null)}
"Controls: \"\\/\n\r\t\b\f\u0000"
The Unicode escapes \u aren't really implemented yet: they always output NULL, but they're only needed for obscure control characters.
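The escaping itself can be a simple character-by-character mapping, along these lines (a hypothetical sketch, not the actual Red-JSON code):
escape-char: func [char [char!]] [
    any [
        select [
            #"^"" {\"}  #"\" {\\}  #"^/" {\n}  #"^M" {\r}
            #"^-" {\t}  #"^H" {\b}  #"^L" {\f}
        ] char
        form char    ; all other characters pass through unchanged
    ]
]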

Kaj
To simplify the TNetStrings and JSON converters for Red, I implemented found?, any-word!, any-word?, series?, any-string!, any-string?, any-block! and any-block? in common.red:
http://red.esperconsultancy.nl/Red-common/dir?ci=tip
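Red may not have had typesets at this point, so such predicates can be approximated with plain functions and blocks of datatypes, something like this (a hypothetical sketch; the actual common.red may differ):
found?: func [value] [not none? value]
any-word!: reduce [word! set-word! get-word! lit-word!]
any-word?: func [value] [
    any [word? value set-word? value get-word? value lit-word? value]
]
any-string?: func [value] [any [string? value file? value url? value]]
any-block?: func [value] [any [block? value paren? value path? value]]
series?: func [value] [any [any-string? value any-block? value]]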
Kaj
I upgraded Red on Try REBOL to the new 0.4.1 release, plus the objects branch merged in:
http://tryrebol.esperconsultancy.nl
So you can now use both objects and PARSE. Also, I included the JSON converter. Here's a nice example to try. It loads JSON from a web service API:
print headers: read "http://headers.jsontest.com"
print to-JSON/map probe load-JSON/keys headers
{
   "Host": "headers.jsontest.com",
   "User-Agent": "",
   "Accept": "*/*"
}
[Host "headers.jsontest.com" User-Agent "" Accept "*/*"]
{
    "Host": "headers.jsontest.com",
    "User-Agent": "",
    "Accept": "*/*"
}

Oldes
I've just submitted a pull request to the Pygments codebase, including a lexer for the Red language:
https://bitbucket.org/birkenfeld/pygments-main/pull-request/263/red-language-red-langorg-lexer/diff
The colorized code then looks like this:
http://duckbar.cz/example.red.html
Unfortunately, there is still an issue with recognition of REBOL source files if they contain lit-words, so if you have a Bitbucket account, maybe you could vote for this issue:
https://bitbucket.org/birkenfeld/pygments-main/issue/934/r-language-versus-rebol-language-detection

Maxim
The Stone DB is consuming a lot of my time, but it's moving forward pretty nicely... current single-thread (in-RAM) imports run at 10 million nodes per second, using an average node payload of 40 bytes (which is longer than the average I'd typically use). The majority of the time is spent verifying internal dataset integrity and memory copying.
It takes 3 seconds to basically grab all available process RAM (2 GB) and create 30 million data nodes. 1 million nodes takes 50 ms on average; I'm getting pretty flat scaling so far, which is a very good sign. Note that the data is completely memory-copied into the DB; I'm not pointing to the original import data.
None of these benchmarks even use a dedicated import function... this is like the worst-case scenario for import: it's a dumb FOR loop using a fully bounds-checking single insert-node() function. If I did an import loop which only does the bounds checking and keeps counters, I could likely scale the import a lot.
I'm now starting work on the higher-level interfaces, basically creating database setups on the fly, and hopefully by Friday I should have the file I/O started.
Maybe next week I'll start to see how I can create a native Stone DB interface for R3.
TomBon
Nice tech you are doing there, Maxim. Count me in for some big data tests. I've never used graph DBs before but would like to give it a try. Currently I have a non-scalable setup, solved suboptimally via simple key traversal stored into a NoSQL core.

Rebolek
I've put my old regex engine on GitHub, http://github.com/rebolek/r2e2, so anyone can improve on it.

Kaj
For the Red JSON converter, I implemented TO-HEX and LOAD-HEX in ANSI.red:
http://red.esperconsultancy.nl/Red-C-library/dir?ci=tip
TO-HEX is like REBOL's; it has a /size refinement to specify the number of hex digits:
red>> to-hex 1023
== "000003FF"
red>> to-hex/size 1023 4
== "03FF"
red>> load-hex "3ff"
== 1023
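In pure Red, LOAD-HEX might be implemented along these lines (a hypothetical sketch; the actual ANSI.red binds to the C library and may differ):
load-hex: func [text [string!] /local result position] [
    result: 0
    foreach char text [
        ; FIND on strings is case-insensitive by default
        position: index? find "0123456789ABCDEF" char
        result: result * 16 + position - 1
    ]
    result
]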

Ladislav
I merged 0.4.40 (64-bit R3 for Linux) to community. Merry Christmas.
