Fix wrong line number when the result file contains a newline
Metadata of 14.0 is NOT compatible with 13.0, so migration is required
migrateV14.sh is the script to migrate the db file.
The migration process is as follows:
cp data/yanagishima.db data/yanagishima.db.bak
bin/migrateV14.sh data/yanagishima.db result
sqlite3 data/yanagishima.db
sqlite> alter table query rename to query_old;
sqlite> alter table query_v14 rename to query;
After you confirm the migrated data, drop the query_old table (see the cleanup sketch below).
Migration takes about 10 hours if the db file is larger than 500MB and the result files total about 1TB.
If yanagishima.db is huge, running vacuum and creating the indexes deq_index on query(datasource, engine, query_id) and deu_index on query(datasource, engine, user) may also be necessary.
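A minimal cleanup and optimization sketch, assuming the default data/yanagishima.db path and that the migrated query table has already been verified:
sqlite3 data/yanagishima.db
sqlite> drop table query_old;
sqlite> vacuum;
sqlite> create index deq_index on query(datasource, engine, query_id);
sqlite> create index deu_index on query(datasource, engine, user);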
Version 13.0
Fix infinite loop bug when /queryStatus returns 500
Add kill hive query feature if you use a kerberized hadoop cluster
Sort table names when you use Treeview
Copy publish URL to clipboard (Chrome only, due to the Async Clipboard API)
Improve code input performance, especially when the query result is huge
Upgrade ace editor
Add message if result count exceeds 500
Improve history tab logic when result file is removed
Add sort partition feature
Fix bug where the 3-pane compare result display disappears
Don't create a fluency instance on every request, to improve performance
Handle issue that presto doesn't support show partitions since 0.202
Add option to use webhdfs api when there are too many partitions
Version 12.0
Convert hive/presto query
Support graphviz to visualize presto explain result
Add tooltip to Set in History/Bookmark tab
Add new presto functions (0.196) to the completion list
Fix bookmark bug
Fix presto authentication failed bug
Version 11.0
Fix timezone bug
Fix exponential notation bug
Support UTF-8 encoding for CSV
Version 10.0
Add timeline tab
Version 9.0
Pretty print for map data
Add left panel to compare query result
Support presto/hive authentication with user/password
If you want to use presto TLS, you need to import the server certificate with keytool -import. See https://prestosql.io/docs/current/security/tls.html for details.
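A minimal sketch of that import, assuming the presto server certificate is saved as presto.crt and is added to the default JDK truststore (the alias, file name, keystore path, and password are placeholders that depend on your environment):
$ keytool -import -trustcacerts -alias presto -file presto.crt -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit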
Search query history
Paging query history
Improve performance to write/read result file
The result file format of 9.0 is tsv, while prior to 9.0 it was json, so migration is required.
migrateV9.sh is the script to migrate result files.
If a migration error occurs, it is reported in the output so you can check it, for example:
$ bin/migrateV9.sh result dest
...
processing /path/to/yanagishima-9.0/result/your-presto/20171010/20171010_072513_02895_xxvvj.json
error /path/to/yanagishima-9.0/result/your-presto/20171010/20171010_072513_02895_xxvvj.json
java.lang.RuntimeException: org.codehaus.jackson.JsonParseException: Unexpected end-of-input: expected close marker for ARRAY (from [Source: java.io.StringReader@e320068; line: 1, column: 0])
at [Source: java.io.StringReader@e320068; line: 1, column: 241]
at yanagishima.migration.MigrateV9.main(MigrateV9.java:59)
Caused by: org.codehaus.jackson.JsonParseException: Unexpected end-of-input: expected close marker for ARRAY (from [Source: java.io.StringReader@e320068; line: 1, column: 0])
at [Source: java.io.StringReader@e320068; line: 1, column: 241]
at org.codehaus.jackson.JsonParser._constructError(JsonParser.java:1433)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportError(JsonParserMinimalBase.java:521)
at org.codehaus.jackson.impl.JsonParserMinimalBase._reportInvalidEOF(JsonParserMinimalBase.java:454)
at org.codehaus.jackson.impl.JsonParserBase._handleEOF(JsonParserBase.java:473)
at org.codehaus.jackson.impl.ReaderBasedParser._skipWSOrEnd(ReaderBasedParser.java:1496)
at org.codehaus.jackson.impl.ReaderBasedParser.nextToken(ReaderBasedParser.java:368)
at org.codehaus.jackson.map.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:211)
at org.codehaus.jackson.map.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:194)
at org.codehaus.jackson.map.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:30)
at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2732)
at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1863)
at yanagishima.migration.MigrateV9.main(MigrateV9.java:56)
processing /path/to/yanagishima-9.0/result/your-presto/20171010/20171010_072517_02897_xxvvj.json
...
Version 8.0
Pretty print for json data
Store query history/bookmarks in the server-side db; the default setting is still to use local storage
Improve partition display
Metadata of 8.0 is NOT compatible with 7.0, so migration is required
The migration process is as follows:
cp data/yanagishima.db data/yanagishima.db.bak
sqlite3 data/yanagishima.db
sqlite> create table query_new (datasource text, engine text, query_id text, fetch_result_time_string text, query_string text, user text, primary key(datasource, engine, query_id));
sqlite> insert into query_new select datasource, engine, query_id, fetch_result_time_string, query_string, null from query;
sqlite> alter table query rename to query_old;
sqlite> alter table query_new rename to query;
sqlite> create table publish_new (publish_id text, datasource text, engine text, query_id text, user text, primary key(publish_id));
sqlite> insert into publish_new select publish_id, datasource, engine, query_id, null from publish;
sqlite> alter table publish rename to publish_old;
sqlite> alter table publish_new rename to publish;
sqlite> create table bookmark_new (bookmark_id integer primary key autoincrement, datasource text, engine text, query text, title text, user text);
sqlite> insert into bookmark_new select bookmark_id, datasource, engine, query, title, null from bookmark;
sqlite> alter table bookmark rename to bookmark_old;
sqlite> alter table bookmark_new rename to bookmark;
After you confirm the migrated data, drop the query_old, publish_old, and bookmark_old tables.
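A minimal cleanup sketch, run in the same sqlite3 session once the migrated tables have been verified:
sqlite> drop table query_old;
sqlite> drop table publish_old;
sqlite> drop table bookmark_old;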
Version 7.0
Support hive on MapReduce (yanagishima executes set mapreduce.job.name=...)
Metadata of 7.0 is NOT compatible with 6.0, so migration is required
The migration process is as follows:
cp data/yanagishima.db data/yanagishima.db.bak
sqlite3 data/yanagishima.db
sqlite> create table query_new (datasource text, engine text, query_id text, fetch_result_time_string text, query_string text, primary key(datasource, engine, query_id));
sqlite> insert into query_new select datasource, 'presto', query_id, fetch_result_time_string, query_string from query;
sqlite> alter table query rename to query_old;
sqlite> alter table query_new rename to query;
sqlite> create table publish_new (publish_id text, datasource text, engine text, query_id text, primary key(publish_id));
sqlite> insert into publish_new select publish_id, datasource, 'presto', query_id from publish;
sqlite> alter table publish rename to publish_old;
sqlite> alter table publish_new rename to publish;
sqlite> create table bookmark_new (bookmark_id integer primary key autoincrement, datasource text, engine text, query text, title text);
sqlite> insert into bookmark_new select bookmark_id, datasource, 'presto', query, title from bookmark;
sqlite> alter table bookmark rename to bookmark_old;
sqlite> alter table bookmark_new rename to bookmark;
After you confirm the migrated data, drop the query_old, publish_old, and bookmark_old tables.
Version 6.0
Support bookmark titles, so a title column is added to the bookmark table
Metadata of 6.0 is NOT compatible with 5.0, so migration is required
The migration process is as follows:
cp data/yanagishima.db data/yanagishima.db.bak
sqlite3 data/yanagishima.db
sqlite> create table if not exists bookmark_new (bookmark_id integer primary key autoincrement, datasource text, query text, title text);
sqlite> insert into bookmark_new select bookmark_id, datasource, query, null from bookmark;
sqlite> alter table bookmark rename to bookmark_old;
sqlite> alter table bookmark_new rename to bookmark;
After you confirm the migrated data, drop the bookmark_old table.