T 1066/22 (Location-based Augmented Reality/META) 28-10-2024
SYSTEMS AND METHODS FOR PROVIDING AUGMENTED REALITY OVERLAYS
I. The appeal lies from the decision of the Examining Division to refuse the application.
II. With the statement of grounds of appeal the Appellant requested that the decision of the Examining Division be set aside and that a patent be granted on the basis of the main request or of one of two auxiliary requests. The main and first auxiliary requests are identical to those underlying the decision under appeal. The second auxiliary request was filed with the statement of grounds of appeal.
III. The requests underlying the decision were refused for lack of clarity and lack of inventive step. In its reasoning the Examining Division referred to document:
D4: Min Weiqing et al.: "A survey on context-aware mobile visual recognition".
IV. In a communication accompanying a summons to oral proceedings, the Board provided its preliminary opinion, which was, inter alia, that all requests lacked inventive step.
V. The Appellant informed the Board that it would not attend the oral proceedings and requested "a decision according to the state of the file", without making any other submission.
VI. The Board cancelled the oral proceedings.
VII. Claim 1 of the main request defines:
A computer-implemented method comprising:
receiving, by a computing system, user location information indicative of a location of a user;
identifying (402), by the computing system, one or more objects depicted in a camera view of a camera application displayed on a display based on a first object recognition machine learning model, wherein the first object recognition machine learning model is selected from a plurality of object recognition machine learning models based on the user location information;
determining (404), by the computing system, an augmented reality overlay based on the one or more objects identified in the camera view; and
modifying (406), by the computing system, the camera view based on the augmented reality overlay;
wherein an object-based augmented reality overlay module (104) provides augmented reality overlays based on objects detected in the camera view using automated object recognition techniques, wherein machine learning models are trained to identify objects depicted in the camera view,
wherein location information is utilized to assist in determining what objects are being depicted in the camera view, wherein, if located in a particular location, objects that are specific to other locations are removed from consideration,
wherein a plurality of machine learning models is trained, where each machine learning model is associated with a particular location such that each machine learning model is trained to identify objects associated with the particular location,
wherein the method is characterized in that: based on the location information, machine learning models are downloaded such that object recognition is performed locally on the user's mobile device using the downloaded machine learning models, wherein
as the location changes, machine learning models associated with previous location information are removed and replaced with machine learning models associated with the current location.
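For illustration only, the following minimal Python sketch reflects the flow recited in the pre-characterising portion of claim 1: an object recognition model is selected on the basis of the user's location, objects in the camera view are identified with that model, and an overlay is determined from the identified objects. All names and data structures (LocationModelStore, provide_overlay, etc.) are hypothetical and are not taken from the application or from D4.

```python
# Minimal illustrative sketch of the claimed flow (location-based selection of an
# object recognition model, object identification, overlay determination).
# All names are hypothetical and not taken from the application or from D4.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

Frame = List[List[int]]                      # stand-in for a camera image
Recognizer = Callable[[Frame], List[str]]    # stand-in for a trained ML model


@dataclass
class LocationModelStore:
    """One object recognition model per location; only the model associated with
    the user's current location is used, so objects specific to other locations
    are removed from consideration."""
    models: Dict[str, Recognizer] = field(default_factory=dict)

    def select(self, location: str) -> Recognizer:
        return self.models[location]


@dataclass
class AugmentedFrame:
    frame: Frame
    overlay_labels: List[str]


def provide_overlay(frame: Frame, location: str,
                    store: LocationModelStore) -> AugmentedFrame:
    model = store.select(location)                     # model selected by location
    objects = model(frame)                             # identify objects in camera view
    overlay = [f"overlay:{name}" for name in objects]  # overlay based on identified objects
    return AugmentedFrame(frame=frame, overlay_labels=overlay)


if __name__ == "__main__":
    store = LocationModelStore(models={
        "paris": lambda frame: ["eiffel_tower"],       # toy location-specific models
        "rome": lambda frame: ["colosseum"],
    })
    print(provide_overlay([[0]], "paris", store))
```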
VIII. Claim 1 of the first auxiliary request defines:
A computer-implemented method comprising:
receiving, by a computing system, user location information indicative of a location of a user;
identifying (402), by the computing system, one or more objects depicted in a camera view of a camera application displayed on a display based on a first object recognition machine learning model, wherein the first object recognition machine learning model is selected from a plurality of object recognition machine learning models based on the user location information;
determining (404), by the computing system, an augmented reality overlay based on the one or more objects identified in the camera view; and
modifying (406), by the computing system, the camera view based on the augmented reality overlay;
wherein an object-based augmented reality overlay module (104) provides augmented reality overlays based on objects detected in the camera view using automated object recognition techniques, wherein object recognition machine learning models are trained to identify objects depicted in the camera view,
wherein location information is utilized to assist in determining what objects are being depicted in the camera view, wherein, if located in a particular location, objects that are located in other locations that are far away from the user's current location are removed from consideration in determining what objects are being depicted in the camera view,
wherein a plurality of object recognition machine learning models is trained, where each object recognition machine learning model is associated with a particular location such that each object recognition machine learning model is trained to identify objects associated with the particular location,
wherein the method is characterized in that: based on the location information, object recognition machine learning models are downloaded such that object recognition is performed locally on the user's mobile device using the downloaded object recognition machine learning models, wherein as the location changes, object recognition machine learning models associated with previous location information are removed and replaced with object recognition machine learning models associated with the current location.
IX. Claim 1 of the second auxiliary request defines:
A computer-implemented method comprising:
receiving, by a computing system, user location information indicative of a location of a user;
identifying (402), by the computing system, one or more objects depicted in a camera view of a camera application displayed on a display based on a first object recognition machine learning model, wherein the first object recognition machine learning model is selected from a plurality of object recognition machine learning models based on the user location information;
determining (404), by the computing system, an augmented reality overlay based on the one or more objects identified in the camera view; and
modifying (406), by the computing system, the camera view based on the augmented reality overlay;
wherein an object-based augmented reality overlay module (104) provides augmented reality overlays based on objects detected in the camera view using automated object recognition techniques, wherein object recognition machine learning models are trained to identify objects depicted in the camera view,
wherein location information is utilized to assist in determining what objects are being depicted in the camera view, wherein, if located in a particular location, objects that are specific to other locations are removed from consideration in determining what objects are being depicted in the camera view,
wherein a plurality of object recognition machine learning models is trained, where each object recognition machine learning model is associated with a particular location such that each object recognition machine learning model is trained to identify objects associated with the particular location,
wherein the method is characterized in that:
based on the location information, object recognition machine learning models are downloaded such that object recognition is performed locally on the user's mobile device using the downloaded object recognition machine learning models, wherein
as the location changes, object recognition machine learning models associated with previous location information are removed and replaced with object recognition machine learning models associated with the current location.
The application
1. The application relates to methods and systems of providing augmented reality overlays (see the application, as originally filed, paragraph 1), in the context of social networks and content creation for posting on such networks (paragraph 2).
1.1 Machine learning models, selected based on geographic location, are used to recognize objects in the user's camera view and to recommend, or apply, appropriate overlays. When the user changes location, the machine learning models are replaced accordingly (paragraph 41).
Main request: inventive step
2. The Examining Division started its inventive step analysis from D4, which is a survey on the topic of context-aware mobile visual recognition.
2.1 In its feature comparison (decision, point 4.2), the Examining Division seems to have referred to different methods summarised in D4 (figure 3, and sections 3.1 and 6.5). It also quoted a passage which does not correspond to the sections referred to, but rather to a description of figure 1 (as also indicated by the Appellant in its statement of grounds of appeal, pages 8 and 9), combined with another passage below figure 2 ("... if the location information is available, the system can significantly reduce the search scope for the captured object").
2.2 It is however not disputed by the Appellant (see the statement of grounds of appeal, page 9) that D4 discloses (in figures 1 and 2) the use of location-based object recognizers and the use of recognition results for augmented reality, and that the differences to D4 are those defined in the characterising portion of the claim, namely:
"based on the location information, machine learning models are downloaded such that object recognition is performed locally on the user's mobile device using the downloaded machine learning models,
wherein as the location changes, machine learning models associated with previous location information are removed and replaced with machine learning models associated with the current location".
In D4, the location-based classifiers reside on the server (see figure 1).
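For illustration only, the distinguishing features identified above (local download of the location-specific models and their replacement when the location changes) could be sketched as follows. The function download_model and the cache structure are hypothetical and are not taken from the application or from D4; they merely illustrate the kind of on-device arrangement the characterising portion describes, in contrast to the server-side classifiers of D4.

```python
# Illustrative sketch of the distinguishing features only: location-specific
# models are downloaded to the device, and models for previous locations are
# removed when the location changes. download_model and the cache layout are
# hypothetical.

from typing import Callable, Dict, List, Optional

Recognizer = Callable[[object], List[str]]


def download_model(location: str) -> Recognizer:
    """Placeholder for fetching the location-specific model from a server."""
    return lambda frame: [f"object_near_{location}"]


class OnDeviceModelCache:
    """Keeps only the model for the current location on the device, so that
    object recognition can be performed locally."""

    def __init__(self) -> None:
        self._models: Dict[str, Recognizer] = {}
        self._location: Optional[str] = None

    def on_location_change(self, new_location: str) -> Recognizer:
        if new_location != self._location:
            self._models.clear()                       # remove previous-location models
            self._models[new_location] = download_model(new_location)
            self._location = new_location
        return self._models[new_location]              # used for local recognition


if __name__ == "__main__":
    cache = OnDeviceModelCache()
    print(cache.on_location_change("paris")(None))     # ['object_near_paris']
    print(cache.on_location_change("rome")(None))      # ['object_near_rome']
```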
3. According to the Examining Division (point 4.2 of the decision), the objective technical problem is to allow offline use of the recognition methods. This requires the download of the location-based object classifiers (point 4.2, see also point 6). Replacing the older models is then, in the Examining Division's view, "standard housekeeping".
4. The Appellant disputed the formulation of the objective technical problem. It was too broadly formulated and disregarded the synergistic effect of providing fast and efficient overlays, which was described in the application (statement of grounds of appeal, page 7, middle paragraph, and page 9). The problem should rather be formulated as follows: "to provide fast and efficient augmented reality overlays while a user is offline in a specific location" (statement of grounds of appeal page 10).
4.1 The Appellant also argued (paragraph bridging pages 7 and 8) that D4 did not acknowledge any bandwidth issues, and it assumed a permanent connection. Offline recognition was not addressed by D4. For that reason, as the Board understands the argument, the problem was not obvious from D4. Also, the claimed solution was not mentioned in the cited prior art, so it was not obvious (statement of grounds of appeal page 7, penultimate paragraph, and page 10).
5. The Board finds the analysis of the Examining Division to be convincing.
5.1 Being able to use features/applications offline is a desire that users can be assumed (in fact, are known) to have had at the relevant priority date. Accordingly, it is a reasonable objective technical problem for the skilled person to address, whether D4 mentions it or not. Solving it necessarily requires the download of the location-based object classifiers to the device.
5.2 Removing data which is no longer needed appears then as an obvious option, for instance in order to save storage space on the device.
6. Regarding the Appellant's argument that this analysis overlooks the synergy between the distinguishing features, the Board remarks the following.
6.1 Article 56 EPC states that an invention shall be considered as involving an inventive step if, having regard to the state of the art, it is not obvious to a person skilled in the art.
6.2 As argued above and in agreement with the Examining Division, the Board considers that the invention is the obvious solution of a technical problem that the skilled person would have considered in view of the prior art. The mere fact that the considered technical problem does not reflect all effects achieved by the invention cannot disprove the conclusion that the invention was obvious over the prior art.
7. The Board therefore concludes that claim 1 of the main request lacks inventive step.
Auxiliary requests
8. The auxiliary requests contain amendments directed to the objections of the Examining Division under Article 84 EPC (see statement of grounds of appeal, sections F to I). They have no impact on the Board's opinion above as to inventive step.
For these reasons it is decided that:
The appeal is dismissed.