Delivered-To: garrigue at math.nagoya-u.ac.jp
Delivered-To: lablgtk at yquem.inria.fr
Subject: Re: [Lablgtk] entry completion for hundreds of thousands of items
Mime-Version: 1.0 (Apple Message framework v1081)
Content-Type: text/plain; charset=us-ascii
From: Yoann Padioleau
In-Reply-To: <9CA765FE-8942-45F2-8203-DE219559E1FA at wanadoo.fr>
Date: Sat, 4 Sep 2010 09:20:11 -0700
Content-Transfer-Encoding: quoted-printable
Message-Id:
References: <9CA765FE-8942-45F2-8203-DE219559E1FA@wanadoo.fr>
To: Yoann Padioleau
Cc: lablgtk at yquem.inria.fr
Status: U

On Sep 4, 2010, at 8:34 AM, Yoann Padioleau wrote:

> On Sep 4, 2010, at 3:37 AM, Adrien wrote:
>
>> Hi,
>>
>> I've only taken a very quick look at your code, but I think that what
>> you need is described in the treeview tutorial[1].
>
> Your link explains that I should detach my model from the view, but
> I use the entry/entry-completion widget, not the treeview, and in my code
> I build a model without any view/entry attached anyway, and it is already
> slow. Also, I don't see any method to disable sorting on GTree.list_store.

Hmm, I didn't see the advice on using a custom model on first reading.

>
>> Also, I've experienced such issues with pretty short lists or trees.
>> No need to have 1000 elements; something like 50 or 100 may be enough
>> to notice a slowdown.
>>
>> Hope this helps.

Yes, I think the custom model should help. I've tried inserting 100000
elements using lablgtk/examples/custom_list_generic.ml and it is way
faster. I just have to adapt the example to use the entry completion
widget instead.

>>
>> [1] http://plus.kaist.ac.kr/~shoh/ocaml/lablgtk2/treeview-tutorial/ch03s03.html#sec-treestore-adding-many-rows
>>
>> --
>> Adrien Nader
>>
>> On 04/09/2010, yoann padioleau wrote:
>>> Hi,
>>>
>>> I want to provide some completion for text where the corpus of items
>>> is above 100000.
>>> I've tried to build a GTree.list_store with those 100000 items,
>>> but it takes too long to build the model. I've then tried to split
>>> those 100000 items into chunks of 1000 based on a common prefix.
>>> Then, as the user starts to type a string, I build a model with only
>>> the relevant 1000 items. But even building a model with only 1000
>>> items takes more than 2 seconds, which makes the whole application
>>> look slow. Is there a way to accelerate the insertion of items into
>>> a list_store?
>>>
>>> I've attached my modification of src/examples/entrycompletion.ml
>>> showing how slow it is.
>>>
>>
>> _______________________________________________
>> Lablgtk mailing list
>> Lablgtk@yquem.inria.fr
>> http://yquem.inria.fr/cgi-bin/mailman/listinfo/lablgtk
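
For reference, the pattern under discussion — bulk-filling a GTree.list_store before it is attached to an entry completion, so that no widget reacts to each of the thousands of row insertions — looks roughly like this. This is only an untested sketch against the lablgtk2 API (GTree.column_list, GTree.list_store, GEdit.entry_completion); the item strings and window setup are made up for illustration, and the custom-model approach from custom_list_generic.ml that the thread recommends for very large corpora is not shown here.

```ocaml
(* Sketch, assuming lablgtk2: populate the store first, attach it
   to the completion afterwards, so no view sees per-row updates. *)
let () =
  ignore (GMain.init ());
  let window = GWindow.window ~title:"completion" () in
  let entry = GEdit.entry ~packing:window#add () in

  (* One string column holding the completion candidates. *)
  let cols = new GTree.column_list in
  let text_col = cols#add Gobject.Data.string in
  let store = GTree.list_store cols in

  (* Bulk insert while the model is not yet attached anywhere;
     the item names here are placeholders. *)
  for i = 0 to 999 do
    let row = store#append () in
    store#set ~row ~column:text_col (Printf.sprintf "item%05d" i)
  done;

  (* Only now wire the filled model to the entry's completion. *)
  let completion = GEdit.entry_completion ~model:store ~entry () in
  completion#set_text_column text_col;

  ignore (window#connect#destroy ~callback:GMain.quit);
  window#show ();
  GMain.main ()
```

Even with this ordering, a default GtkListStore copies every value on insertion, which is why the thread converges on a custom GtkTreeModel (as in lablgtk's custom_list_generic.ml example) that merely wraps an existing OCaml array.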