Build an object detection model with Amazon Rekognition custom labels and Roboflow
2021-08-10 04:01:18| The Webmail Blog
Build an object detection model with Amazon Rekognition Custom Labels and Roboflow | nellmarie.colman | Mon, 08/09/2021 - 21:01

Computer vision technology is making a difference in every industry: ensuring hard hat compliance at construction sites, distinguishing plants from weeds for targeted herbicide use, and identifying and counting cell populations in laboratory experiments. By training computers to interpret the visual world as well as or better than humans can, we can quickly identify and classify objects and automatically take actions based on that information. This makes it possible to improve workplace safety, protect our environment and accelerate innovation across industries.

Computer vision problem types

Although computer vision output is relatively simple ("This person is or is not wearing a hard hat correctly at the construction site"), training the computer vision backend can be challenging. It must be able to accurately identify and organize objects according to multiple factors, such as:

Classification: This is a person.
Classification + localization: This is a person at a construction site.
Object detection: There are two people, plus one hard hat, at the construction site.
Semantic segmentation: There are two people, plus one hard hat, and this is the shape of each.
Keypoint detection and pose estimation: There are two people. One is wearing a hard hat, but it is not positioned correctly. The other is not wearing a hard hat at all.

To get the right output, you need the right input. And that generally requires seven important steps. Let's walk through those.

The seven steps of training an object detection model from scratch

1. Defining the problem

Start by defining exactly what you want to do. What is your use case? This will help guide each of the steps that follow.

2. Data collection

Next, you'll need to collect photos and videos that are representative of the problem you're trying to solve for.
For example, if you're aiming to build a hard hat detector, you'll need to collect images of multiple hard hat types, as well as settings where people may be wearing hard hats. Remember to provide images in a variety of conditions: bright vs. dim, indoor vs. outdoor, sunny vs. rainy, people alone vs. in a group, etc. The better the variety, the better your model can learn.

3. Labeling

There are dozens of different image annotation formats, with image labels coming in all shapes and sizes. Popular annotation formats include Pascal VOC, COCO JSON and YOLO TXT. But each model framework expects a certain type of annotation. For example, TensorFlow expects TFRecords, and the Rekognition service expects a manifest.json file that's specific to AWS annotation. So, above all, make sure that your images are labeled in the consistent format that your model framework requires. And use a tool like Amazon SageMaker Ground Truth to streamline the process.

Some labeling tips to keep in mind:

Label around the entirety of the object. It's better to include a little bit of non-object buffer than to exclude a portion of the object within a rectangular label. Your model will understand edges far better this way.
Label hidden/occluded objects entirely. If an object is out of view because another object is in front of it, label the object anyway, as though you could see it in its entirety. Your model will begin to understand the true bounds of objects this way.
For objects partially out of frame, generally label them. This depends on the problem you're trying to solve for. But in general, even a partial object is still an object to be labeled.

4. Data pre-processing

Now is the time to ensure your data is formatted correctly for your model: resizing, re-orienting, making color corrections, etc., as needed. For example, if your model requires a square aspect ratio, you should format your photos/videos to fill a square space, perhaps using black or white pixels to fill the empty space.
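To make the square-padding idea concrete, here is a minimal sketch in plain Python (no image library) that computes the square canvas size and the centered paste offset for letterboxing an image; the function name and return shape are illustrative, not part of any Rekognition or Roboflow API.

```python
def square_pad(width, height):
    """Compute the square canvas side length and the top-left offset at
    which to paste a width x height image so it is centered, with the
    remaining pixels left as a solid fill color (e.g. black or white)."""
    side = max(width, height)      # square canvas side length
    x_off = (side - width) // 2    # horizontal padding on the left
    y_off = (side - height) // 2   # vertical padding on the top
    return side, (x_off, y_off)

# Example: a 1920x1080 frame becomes a 1920x1920 canvas,
# with the image pasted 420 px down from the top edge.
side, (x, y) = square_pad(1920, 1080)
print(side, x, y)  # 1920 0 420
```

With an imaging library you would then create a `side x side` canvas in the fill color and paste the original image at `(x, y)`.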
You will also want to remove EXIF metadata from your images, since that can sometimes confuse the model. Or, if you want the model to be insensitive to color (e.g., it doesn't matter what color the hard hat is), you can convert your images/video to grayscale to eliminate that factor.

5. Data augmentation

Next, you should apply different formatting to your existing content, to expose your model to a wider array of training examples. By flipping, rotating, distorting, blurring and adjusting the color of your images, you are, in effect, creating new data. So, instead of having to actually take photos of people wearing hard hats in different lighting conditions, you can use augmentation to simulate brighter or dimmer room lighting. You can also train your model to be insensitive to occlusion, so that it can still detect an object even if it becomes blocked by another object. You can do this by adding black box cutouts to your photos/videos.

6. Training the model

To train your model, you'll use a tool like Amazon Rekognition Custom Labels, which will process the inputs you've created during the first five steps. You'll need to decide, though, what's most important for your use case: accuracy, speed or model size? Generally, these factors trade off against one another.

7. Inference

Now it's time to actually put your model into production. This will vary depending on the type of deployment. For example, will you be using embedded devices such as cameras on a factory line? Or will this be a server-side deployment with APIs?

See the process in action

Last year we recorded a webinar where we walked through how to use Amazon Rekognition Custom Labels with Roboflow to deploy a system that can detect whether or not people are wearing face masks. You can apply the same steps to your own object detection models, to serve your own use cases. Watch the webinar on-demand to follow the end-to-end process of creating a working object detection model.
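For the server-side flavor of the inference step, a deployment against Amazon Rekognition Custom Labels boils down to calling the DetectCustomLabels API with the ARN of your trained model version. The sketch below only assembles the request parameters; the project version ARN, bucket and key are placeholders, and the actual boto3 call is shown commented out so the snippet stays self-contained.

```python
def build_detect_request(project_version_arn, bucket, key, min_confidence=50):
    """Assemble keyword arguments for Rekognition's DetectCustomLabels
    call, pointing at an image stored in S3."""
    return {
        "ProjectVersionArn": project_version_arn,
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MinConfidence": min_confidence,
    }

# Placeholder ARN, bucket and key for illustration only.
request = build_detect_request(
    "arn:aws:rekognition:us-east-1:123456789012:project/hardhats/version/1",
    bucket="my-site-images",
    key="frames/cam01/0001.jpg",
    min_confidence=70,
)

# With AWS credentials configured and the model version running,
# the actual call would look like:
# import boto3
# client = boto3.client("rekognition")
# response = client.detect_custom_labels(**request)
# for label in response["CustomLabels"]:
#     print(label["Name"], label["Confidence"])
```

Raising `MinConfidence` trades recall for precision, which is one concrete form of the accuracy/speed trade-off mentioned under step 6.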
Xpeng announces next-gen autonomous driving architecture; lidar for enhanced object recognition
2020-11-24 11:55:45| Green Car Congress
Astronomer Near-Earth Object (NEO) Observer
2020-06-15 17:13:14| Space-careers.com Jobs RSS
Position Reference: 068

RHEA Group is a growing international company, focusing on providing innovative, market-ready solutions and services in our key sectors of Space and Security for both commercial and institutional customers. We employ over 500 staff working across 10 different countries. We work with distinguished clients such as the European Space Agency, EUMETSAT, NATO, the European Commission, the Canadian Government and national space agencies. When you work for RHEA, you will have the opportunity to work alongside some of the best talented minds and experts in our industries, either working at our clients' sites on some of the most exciting space missions or on cutting-edge projects in security, concurrent design, data and ground systems within our own offices. To attract the best candidates, RHEA offers our employees competitive remuneration packages, unique career opportunities, individualised training and development programmes and local relocation support to take the stress out of moving to another country or city.

We are recruiting now. We understand your concerns during this period of a global pandemic and we will work with you, at your pace, ensuring your questions are answered and maximum flexibility is offered.

We are currently looking for an Astronomer Near-Earth Object (NEO) Observer to work in the beautiful city of Frascati, Italy. You will support and coordinate observational activities of the NEOCC, with a particular focus on scientific data analysis and/or instrumentation-related activities. You will be working at ESA's NEO Coordination Centre (NEOCC), located inside ESA's ESRIN centre in Frascati, Italy.

Tasks and Activities

The scope of work will include:

Support the testing, integration and commissioning of the Flyeye telescope. In particular, review and update documentation, analyse test image data, prepare test and commissioning plans, and participate in factory and site acceptance tests.
In cooperation with the Space Debris team, use the two Test-Bed Telescopes for survey and follow-up observations.
Set up a near-Sun survey for the Test-Bed Telescope 2, which is planned to become operational in Chile in 2020.
Remotely schedule the Flyeye telescope and ensure proper data analysis.
Provide on-call support for the Flyeye operations.
Prepare and coordinate astrometric observations of NEOs for all collaborating telescopes.
Prepare an observing plan for faint NEOs from the risk list for ESO's Very Large Telescope, and coordinate with ESO to get the observations done.
Perform astrometric measurements when needed.
Coordinate with other observing programmes, e.g. those of the US, or the EU-funded NEOROCKS activity.
Support the preparation of physical-properties observations of NEOs, when needed.
Support all observation-related discussions in the Planetary Defence office.
Inform the observations manager of any issues.
Present observational successes at conferences and workshops.

The activities will mostly be done during nominal working hours; however, you are expected to do night and/or weekend shifts, such as:

Typically, 4 nights per month are dedicated to the use of ESA's OGS telescope.
At least once per month, urgent observations require a partial night shift.
Part of the Flyeye testing, integration, commissioning and/or calibration campaigns will be carried out at night.
On-call service for the Flyeye operations: you may be contacted in case of emergency, e.g. you may be required to restart or update the foreseen schedule.

Most of the observations are executed remotely. However, you shall also be ready to travel to participate in relevant meetings and observational campaigns if required.

Skills and Experience

The following skills and experience are mandatory:

Degree in astronomy, astrophysics, physics or closely related fields.
Expertise in optical detector system integration, test and validation.
Expertise in data reduction and calibration of detector systems.
Familiarity with using telescopes.

How to Apply

Looking to take your career to the next level? Interested applicants should submit their CV and cover letter to RHEA's Recruitment team at careers@rheagroup.com no later than 02/07/2020. Preference will be given to candidates eligible for an EU or national personal security clearance at the level of CONFIDENTIAL or above.