Most current image retrieval systems issue "one-shot" queries to a database to retrieve similar images. Typically a K-NN (K-nearest-neighbor) style algorithm is used in which the weights of the features used to represent images remain fixed (or are manually tweaked by the user) in the computation of a given similarity metric. However, not all features are equally important for a given query, nor is a single similarity metric optimal for all kinds of images in a database. Manually adjusting these weights and selecting a similarity metric is tedious, and doing it well requires a very sophisticated user. The authors present a novel image retrieval system that continuously learns the feature weights and selects an appropriate similarity metric based on the user's feedback, given as positive or negative image examples. Experimental results are presented that provide an objective evaluation of the learning behavior of the system for image retrieval.
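The abstract does not give the authors' update rule, but the general idea of learning feature weights from positive examples can be sketched with one classic heuristic: weight each feature by the inverse variance of its values over the images the user marked relevant, so that features which are consistent across relevant images dominate the weighted distance. The function names and the specific update rule below are illustrative assumptions, not the paper's method.

```python
import math

def weighted_distance(x, y, w):
    # Weighted Euclidean distance between two feature vectors.
    return math.sqrt(sum(wi * (xi - yi) ** 2 for wi, xi, yi in zip(w, x, y)))

def update_weights(positives, eps=1e-6):
    # Illustrative relevance-feedback heuristic (not the paper's rule):
    # a feature that varies little across the user's positive examples
    # is discriminative for this query, so it receives a higher weight.
    n = len(positives)
    dims = len(positives[0])
    weights = []
    for d in range(dims):
        vals = [p[d] for p in positives]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        weights.append(1.0 / (var + eps))  # inverse variance, eps avoids /0
    total = sum(weights)
    return [w / total for w in weights]   # normalize to sum to 1
```

In a retrieval loop, the system would rank database images by `weighted_distance` to the query, collect the user's positive examples, call `update_weights`, and re-rank, so the metric adapts to the query session instead of staying fixed.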