Human following is an important capability in many human-mobile-robot collaboration applications. Vision-based approaches that use visual servoing control to achieve human following are widely adopted. Such approaches, however, require the desired human-following parameters to be predefined, and must extract features online from the acquired images to compute the human-following parameters that serve as feedback for the visual servoing control. This paper proposes a novel visual servoing control based on non-vector space control theory, which enables the robot to personalize its desired human-following parameters as a desired image learned from human-driven demonstrations. The approach provides an easy and intuitive way for humans to personalize mobile robots so that they complete human-following tasks in the manner that humans prefer. Experimental results demonstrate the effectiveness and advantages of the proposed approach.