AltME: Announce

Messages

Gregg
My implementation of OFFSET? in Red is exactly the same as Kaj's. I didn't think about adding that check either, but I think I have used it on different series at least once in the past, tracking progress in two different queues.
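As a minimal sketch of the kind of check being discussed (illustrative only, not Gregg's or Kaj's actual code), an OFFSET? that refuses to compare positions from different series could look like this:
offset?: func [
    "Difference between the indexes of two positions in the same series"
    series1 [series!]
    series2 [series!]
][
    either same? head series1 head series2 [
        (index? series2) - (index? series1)
    ][
        none    ; positions belong to different series: no meaningful offset
    ]
]
probe offset? s: "abcdef" skip s 3    ; 3
probe offset? "abc" "def"             ; none -- the check catches mixed series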
Kaj
By request, I added a /deep refinement to my JSON emitter for emitting nested blocks as objects. The /map refinement now only works on even-sized blocks, instead of treating odd-sized blocks as an error:
http://red.esperconsultancy.nl/Red-JSON/dir?ci=tip
print to-JSON/map/deep [a 9 [b 9  c 42]]
["a", 9, {"b": 9, "c": 42}]
The JSON loader now supports string escaping, except Unicode escapes, which are implemented but are waiting for Red's LOAD to support char! syntax with parens: #"^(0000)"
probe load-JSON {"Escape: \"\\\/\n\r\t\b\f"}
{Escape: "\/^/^M^-^H^L}

amacleod
Very exciting stuff, Kaj!
Kaj
I updated Red on Try REBOL, so it has the latest PARSE fixes:
http://tryrebol.esperconsultancy.nl
It also includes my Tagged NetStrings converter: the to-TNetString and load-TNetString functions:
http://red.esperconsultancy.nl/Red-TNetStrings/dir?ci=tip&name=examples
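For anyone unfamiliar with the format, tagged netstrings prefix each value with its byte length and suffix it with a type tag (a comma for strings, # for integers). A hypothetical session assuming the standard encoding (the exact output depends on the converter):
red>> to-TNetString "hello"
== "5:hello,"
red>> load-TNetString "2:42#"
== 42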

Kaj
I added pretty printing to the JSON emitter:
http://red.esperconsultancy.nl/Red-JSON/dir?ci=tip
print to-JSON/map [a 9  b [9]  c 42]
{
    "a": 9,
    "b":
    [
        9
    ],
    "c": 42
}
There's now a /flat refinement that omits all spacing:
print to-JSON/flat/map [a 9  b [9]  c 42]
{"a":9,"b":[9],"c":42}
amacleod
love it!

Kaj
I implemented string escaping in the JSON emitter:
http://red.esperconsultancy.nl/Red-JSON/dir?ci=tip
print to-JSON {Controls: "\/^(line)^M^(tab)^(back)^(page)^(null)}
"Controls: \"\\/\n\r\t\b\f\u0000"
The Unicode escapes \u aren't really implemented yet: they always output NULL, but they're only needed for obscure control characters.

Kaj
To simplify the TNetStrings and JSON converters for Red, I implemented found?, any-word!, any-word?, series?, any-string!, any-string?, any-block! and any-block? in common.red:
http://red.esperconsultancy.nl/Red-common/dir?ci=tip
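As a rough illustration (an assumption for this note, not the actual common.red code), such predicates can be written in plain Red as blocks of datatypes plus membership tests:
any-string!: reduce [string! file! url!]
any-string?: func [value] [not none? find any-string! type? value]
any-block!: reduce [block! paren! path!]
any-block?: func [value] [not none? find any-block! type? value]
found?: func [value] [not none? value]
The word! variants (any-word!, any-word?) and series? follow the same pattern.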
Kaj
I upgraded Red on Try REBOL to the new 0.4.1 release, with the objects branch merged in:
http://tryrebol.esperconsultancy.nl
So you can now use both objects and PARSE. Also, I included the JSON converter. Here's a nice example to try. It loads JSON from a web service API:
print headers: read "http://headers.jsontest.com"
print to-JSON/map probe load-JSON/keys headers
{
   "Host": "headers.jsontest.com",
   "User-Agent": "",
   "Accept": "*/*"
}
[Host "headers.jsontest.com" User-Agent "" Accept "*/*"]
{
    "Host": "headers.jsontest.com",
    "User-Agent": "",
    "Accept": "*/*"
}

Oldes
I've just submitted a pull request to the Pygments codebase, including a lexer for the Red language:
https://bitbucket.org/birkenfeld/pygments-main/pull-request/263/red-language-red-langorg-lexer/diff
The colorized code then looks like this:
http://duckbar.cz/example.red.html
Unfortunately, there is still an issue with recognition of REBOL source files if they contain lit-words, so if you have a Bitbucket account, maybe you could vote for this issue:
https://bitbucket.org/birkenfeld/pygments-main/issue/934/r-language-versus-rebol-language-detection

Maxim
The Stone DB is consuming a lot of my time, but it's moving forward pretty nicely. Current single-thread (in-RAM) imports run at 10 million nodes per second, using an average node payload of 40 bytes (which is longer than the average I'd typically use). The majority of the time is spent verifying internal dataset integrity and copying memory.
It takes 3 seconds to basically grab all available process RAM (2GB) and create 30 million data nodes. 1 million nodes takes 50ms on average; I'm getting pretty flat scaling so far, which is a very good sign. Note that the data is completely copied into the DB's memory; I'm not pointing at the original import data.
None of these benchmarks even use a dedicated import function... this is like the worst-case scenario for import. It's a dumb FOR loop using a single, fully bounds-checking insert-node() function. If I wrote an import loop that only does the bounds checking and keeps counters, I could likely scale the import a lot.
I'm now starting work on the higher-level interfaces, basically creating database setups on the fly, and hopefully by Friday I should have the file I/O started.
Maybe next week I'll start to see how I can create a native Stone DB interface for R3.
TomBon
Nice tech you are doing there, Maxim. Count me in for some big data tests. I have never used graph DBs before but would like to give it a try.
For a non-scalable setup, this is currently solved suboptimally via simple key traversal stored in a NoSQL core.

Rebolek
I've put my old regex engine on GitHub (http://github.com/rebolek/r2e2) so anyone can improve on it.

Kaj
For the Red JSON converter, I implemented TO-HEX and LOAD-HEX in ANSI.red:
http://red.esperconsultancy.nl/Red-C-library/dir?ci=tip
TO-HEX is like REBOL's: it has a /size refinement to specify the number of hex digits:
red>> to-hex 1023
== "000003FF"
red>> to-hex/size 1023 4
== "03FF"
red>> load-hex "3ff"
== 1023
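As a rough sketch of the idea (an illustrative pure-Red version, not the ANSI.red implementation):
load-hex: func [
    "Convert a string of hexadecimal digits to an integer"
    hex [string!]
    /local digits value
][
    digits: "0123456789ABCDEF"
    value: 0
    foreach char uppercase copy hex [
        value: (value * 16) + ((index? find digits char) - 1)
    ]
    value
]
probe load-hex "3ff"    ; 1023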

Ladislav
I merged 0.4.40 (64-bit R3 for Linux) into community. Merry Christmas.
Maxim
Does this include any View code?

Kaj
I made many improvements to the TNetStrings and JSON converters for Red:
http://red.esperconsultancy.nl/Red-TNetStrings/dir?ci=tip
http://red.esperconsultancy.nl/Red-JSON/dir?ci=tip
- Floating-point numbers are now parsed and loaded as file! values, so external data containing floats can at least be loaded, and the numbers can be detected and processed further by your own functions.
red>> load-JSON "6.28"
== %6.28
- The char! type is now more explicitly supported, in the sense that single-character strings will be loaded as char! values, which is more efficient.
red>> load-JSON {["a", 9, "bc", 42]}
== [#"a" 9 "bc" 42]
- The object! type is now supported, so it becomes easier to emit TNetStrings with nested dictionaries and JSON data with nested objects. The converters can still (and need to) be compiled: they use the interpreter only very sparingly for object support.
red>> load-JSON/objects {{"a": 9, "bc": 42}}
== make object! [
    a: 9
    bc: 42
]
red>> print to-JSON context [a: 9 b: 42]
{
    "a": 9,
    "b": 42
}
- bitset! type is now supported in the emitter. Small bitsets that fit in bytes are emitted as character lists.
red>> print to-JSON/flat s: charset [#"0" - #"9"]
["0","1","2","3","4","5","6","7","8","9"]
Complemented bitsets are not explicitly supported because they would be too large.
red>> print to-JSON complement s
"make bitset! [not #{000000000000FFC0}]"
Larger bitsets are emitted as integer lists.
red>> print to-JSON charset [100 1000]
[
    100,
    1000
]
- All Red data types can now be emitted. Types that are not explicitly supported are FORMed.
- Several new refinement options, in particular for object support.
red>> load-JSON/values {["#issue", "%file", "{string}"]}
== [#issue %file "string"]
Loading JSON objects and TNetStrings dictionaries still defaults to generating Red block!s.
red>> load-JSON {{"a": 9, "bc": 42}}
== [#"a" 9 "bc" 42]
red>> load-JSON/keys {{"a": 9, "bc": 42}}
== [a 9 bc 42]
- More efficiency optimisations. The converters use a minimum of memory.
- Unicode escapes in JSON strings are now fully supported.
red>> load-JSON {"Escapes: \"\\\/\n\r\t\b\f\u0020\u0000"}
== {Escapes: "\/^/^M^-^H^L ^@}
red>> print to-JSON {Controls: "\/^/^M^-^H^L^@}
"Controls: \"\\/\n\r\t\b\f\u0000"
red>> print to-JSON make char! 1
"\u0001"
- The JSON converter now implements the full specification on json.org except escaped UTF-16 surrogate pairs. There is little reason for them to occur in JSON data.
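For reference, a surrogate-pair escape such as \uD83D\uDE00 encodes a single codepoint above U+FFFF; combining the pair is simple arithmetic (illustrative only, the converter does not do this):
high: 55357    ; hex D83D, the high surrogate
low: 56832     ; hex DE00, the low surrogate
probe 65536 + (1024 * (high - 55296)) + (low - 56320)    ; 128512, i.e. U+1F600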
Kaj
The JSON converter is still smaller than the official R2 implementation. It's now larger than the R3 implementation, but has more features. It's still an order of magnitude smaller than most JSON implementations in other languages.

Maxim
StoneDB is starting to take shape. I got the preliminary disk storage prototype finished today. I can't give factual speed benchmarks since for now I've got no time to do extensive testing... but it seems to be able to store at least 500,000 nodes a second (about 14 MB/s), which is pretty decent for a prototype using default C disk-writing functions and absolutely no regard for disk I/O profiling. This is even more acceptable considering it's running on a lame notebook disk. (I should have an SSD after the holidays, so I'll be able to compare. :-)
With the current architecture, I should be able to read any cell directly from disk, so the query set can be larger than physical RAM.
If all goes well, I should have persistent read/write access to the DB's file data done by the time I go to bed tonight... yay!
After that... cell linking, which will require a different variable-length dataset driver. This new one will allow perpetual appending without any need to copy memory. :-)
