AltME: Databases

Messages

afsanehsamim
I compare each field of the two tables with each other, like:
insert db ["select data.oneone, data1.oneone from data LEFT JOIN data1 ON data.oneone = data1.oneone"]
results: copy db
probe results
insert db ["select data.onetwo, data1.onetwo from data LEFT JOIN data1 ON data.onetwo = data1.onetwo"]
results: copy db
probe results
insert db ["select data.onethree, data1.onethree from data LEFT JOIN data1 ON data.onethree = data1.onethree"]
results: copy db
probe results
...
I got results.
I need code for showing a message to the user; that is, after each join it should show the user whether the value is correct or not.
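A minimal sketch of that check (untested; the helper name check-join is made up here, and it assumes each result row is a block whose second value is none when the LEFT JOIN found no match):
    ; report, for each row of a join result, whether the value exists in both tables
    check-join: func [rows [block!] field [string!]] [
        foreach row rows [
            either none? second row [
                print [field "value" mold first row "is not in data1 - incorrect"]
            ][
                print [field "value" mold first row "matches - correct"]
            ]
        ]
    ]
    insert db ["select data.oneone, data1.oneone from data LEFT JOIN data1 ON data.oneone = data1.oneone"]
    check-join copy db "oneone"
The same call could be repeated after each of the other joins.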
afsanehsamim
Guys, could you please tell me: after comparing the values of two tables, how can we show the output on a web page?
After writing the queries:
foreach row read/custom mysql://root@localhost/test ["select data.oneone, data1.oneone from data LEFT JOIN data1 ON data.oneone = data1.oneone"] [print row]
foreach row read/custom mysql://root@localhost/test ["select data.onetwo, data1.onetwo from data LEFT JOIN data1 ON data.onetwo = data1.onetwo"] [print row]
...
I got these results:
c c
a none
t t
a none
e none
r none
o none
a none
Now how can I write a query for all the values which are the same, and print the correct message on a web page?
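To put that on a web page, a minimal CGI sketch (untested; it assumes the script runs as a REBOL CGI and that the second value of each row is none when there is no match, as in the output above):
    print "Content-Type: text/html^/"    ; CGI header must come before any HTML
    print "<html><body>"
    foreach row read/custom mysql://root@localhost/test [
        "select data.oneone, data1.oneone from data LEFT JOIN data1 ON data.oneone = data1.oneone"
    ][
        either none? second row [
            print ["<p>" first row ": no match - incorrect</p>"]
        ][
            print ["<p>" first row ": match - correct</p>"]
        ]
    ]
    print "</body></html>"
The foreach would be repeated for the other column pairs (onetwo, onethree, ...).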

afsanehsamim
Hey guys... I have just 2 days left for my project! Could you help me?
I could not do the last step... I need to show the result of comparing the values on a web page.

TomBon
A quick update on elasticsearch.
I have currently reached 2 TB of data (~85M documents) on a single node.
Queries are now starting to slow down, but the system is very stable even under
heavy load. While queries on average took between 50-250 ms against a
dataset of around 1 TB, the same queries are now in a range between 900-1500 ms.
The average allocated Java heap is around 9 GB, which is nearly 100% of the
max heap size with a 15 shards / 0 replicas setting.
elasticsearch looks like a very good candidate for handling big data with
a need for 'near realtime' analysis. Classical RDBMSs like MySQL and PostgreSQL
were grilled at around 150-500 GB. Another candidate tested was MongoDB,
which was great too, but since it stores all metadata and fields uncompressed,
the waste of disk space was ridiculously high. Furthermore, query execution times
differed unpredictably, by a factor of 3, for no known reason.
Tokyo Cabinet started fine, but at around 1 TB I noticed file integrity problems
which led to endless restoring/repairing procedures. Adding sharding logic
by coding an additional layer wasn't very motivating, but could solve this issue.
Within the next six months the data size should reach the 100 TB mark.
It would be interesting to see how elasticsearch will scale and how many
nodes are necessary to handle this efficiently.
Maxim
When you talk about "documents", what type of documents are they?
Gregg
Thanks for the info Tomas.
TomBon
Crawled HTML/MIME-embedded documents/images etc., stored as plain compressed source (avg. 25 KB), plus 14 searchable metafields (ngram), used to train different NN types for pattern recognition.
Maxim
thanks  :-)

MaxV
I have a problem with RebDB: how does db-select/group work?
Example:
>> db-select/where/group/count [ID title post date]  archive  [find post "t" ] [ID]
** User Error: Invalid number of group by columns
** Near: to error! :value
Endo
Don't you need to use aggregate functions when you use grouping?
Scot
I use the sql dialect like this:
sql [select count [ID title post date] from archive group by [ID title post] where [find post "t"]]
The trick with this particular query is that the "count" selector must have exactly one more column than the "group by" selector.  The first three elements [ID title post] are used to sort the output and the last element [date] is counted.
The output will be organized as:
    ID  title   post    count
I would like to be able to include other columns in the output that are not part of the grouping or count, but I haven't figured out how to do this in RebDB.  I have used a parse grammar on the output to achieve the desired result.
I would also like to query the results of a query, which I haven't figured out how to do without creating and committing a new database.  So I have used a parse grammar to merge two queries.
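Following that rule, a sketch of how MaxV's call above could be written (untested): with four selected columns, the group block takes the first three, and the fourth column [date] is what gets counted:
    db-select/where/group/count [ID title post date] archive [find post "t"] [ID title post]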

Pavel
SQLite version 4 has been announced/proposed. The default built-in storage engine is a log-structured merge database instead of the B-tree used in SQLite3. As far as I understand the docs, this store could be usable standalone or through the SQL frontend. Google for SQLite4.
