Abstract

A viewpoint of a 3D object is the position from which we observe the object. A viewpoint always highlights some parts of an object and hides others. Here, we define a good viewpoint as one offering a relevant view of the object: a view that best showcases the object and is the most representative of it. Best view selection plays an essential role in many computer vision and virtual reality applications. In this paper, given a model and a particular viewpoint, we want to quantify the viewpoint's relevance, not its aesthetics. We propose a geometric method for selecting the most relevant viewpoint for a 3D object by combining visibility and view-dependent saliency. Evaluating the quality of an estimated best viewpoint is a challenge. We therefore propose an evaluation protocol based on two different and complementary solutions: a user study with more than 200 participants to collect human preferences, and an analysis of an image dataset picturing objects of interest. This evaluation highlights the correlation between our method and human preferences. A quantitative comparison demonstrates the efficiency of our approach over reference methods.

Point of View Scoring

We propose a relevance measure to automatically select the best viewpoint of a 3D object based on view-dependent 3D saliency. The score for a particular viewpoint (pov) is defined as:


Score(pov) = S(pov) + Se(pov) + Sa(pov)

Score Parameters:

Visibility S(pov): Quantifies the proportion of the model’s total 3D surface visible from the given viewpoint. It is computed as the ratio of the visible 3D surface to the total 3D surface.


Eye Surface Visibility Se(pov): Measures the visibility of the eye surface (if present) from the given viewpoint. It is the ratio of the visible 3D surface of the eyes to their total surface.


Saliency of Visible Vertices Sa(pov): Represents the view-dependent 3D saliency of visible vertices. Computed as the sum of saliency values Si(v) of visible vertices, weighted by an angle-based function f(αv):

Sa(pov) = ∑_{v ∈ V} Si(v) · f(αv)
where V is the set of vertices visible from viewpoint pov and αv is the angle between the normal at vertex v and the view direction.
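The three terms above can be combined in a minimal sketch. The function name and arguments below are hypothetical: it assumes the visible surface areas, per-vertex saliency values Si(v), per-vertex cos(αv), and the visibility mask have already been computed (e.g., by a ray-casting or depth-buffer pass, which is not shown here).

```python
import numpy as np

def viewpoint_score(area_visible, area_total,
                    eye_area_visible, eye_area_total,
                    saliency, cos_alpha, visible_mask):
    """Sketch of Score(pov) = S(pov) + Se(pov) + Sa(pov).

    saliency     : per-vertex intrinsic saliency Si(v)
    cos_alpha    : per-vertex cos(alpha_v), normal vs. view direction
    visible_mask : boolean mask of vertices visible from the viewpoint
    """
    # S: ratio of visible 3D surface to total 3D surface
    S = area_visible / area_total
    # Se: same ratio restricted to the eye surface, 0 if the model has no eyes
    Se = eye_area_visible / eye_area_total if eye_area_total > 0 else 0.0
    # Sa: saliency of visible vertices weighted by f(alpha_v) = cos(alpha_v)
    Sa = float(np.sum(saliency[visible_mask] * cos_alpha[visible_mask]))
    return S + Se + Sa
```

In practice the score would be evaluated for each candidate viewpoint on a sampling sphere around the object, keeping the pov with the highest value.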

Saliency Parameters:

Intrinsic Saliency Method (Si): Five methods were tested: Lee (2005), Song (2014), Tasse (2015), Leifman (2016), and Limper (2016). Limper’s entropy-based multi-scale method gave the best results.


Angle Function (f): Five functions were tested: cos(αv), √cos(αv), 1−cos(αv), 1−√cos(αv), and 0.5 + (1−√cos(αv))/2. The cosine function was the most effective, as it prioritizes vertices facing the camera; the functions that emphasize contours may favor accidental views.
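The behavior of the five candidate weightings can be compared directly. This is an illustrative sketch (the dictionary and its keys are not from the paper): evaluating each f at a front-facing, an oblique, and a grazing angle shows that cos(αv) peaks for vertices facing the camera, while 1−cos(αv) and its variants peak near contours.

```python
import numpy as np

# The five candidate weightings f(alpha_v) described in the text,
# expressed as functions of c = cos(alpha_v).
ANGLE_FUNCS = {
    "cos":             lambda c: c,
    "sqrt_cos":        lambda c: np.sqrt(c),
    "1-cos":           lambda c: 1.0 - c,
    "1-sqrt_cos":      lambda c: 1.0 - np.sqrt(c),
    "0.5+(1-sqrtc)/2": lambda c: 0.5 + (1.0 - np.sqrt(c)) / 2.0,
}

# Front-facing (0 deg), oblique (60 deg), grazing/contour (90 deg)
c = np.cos(np.deg2rad([0.0, 60.0, 90.0]))
for name, f in ANGLE_FUNCS.items():
    print(f"{name:16s}", np.round(f(c), 3))
```

With f = cos, the weight is maximal for camera-facing vertices and vanishes at the silhouette, which matches the reported preference against accidental views.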
