990 results for object orientation processing
Abstract:
Today's software systems are large, complex, and critical. The need for quality requires extensive testing, which consumes large amounts of resources during the development and maintenance of these systems. Various techniques can reduce the costs associated with testing activities. Our work falls within this context and aims to direct testing effort toward the software components most at risk, using certain source code attributes. Through several empirical studies conducted on large open-source software systems developed with object-oriented technology, we identified and studied the metrics that characterize unit testing effort from several angles. We also studied the relationships between this testing effort and the metrics of software classes, including quality indicators. Quality indicators are a synthetic metric, introduced in our previous work, that captures control flow as well as various characteristics of the software. We explored several techniques for directing testing effort toward at-risk components based on these source code attributes, using machine learning algorithms. By grouping software metrics into families, we proposed an approach based on risk analysis of software classes. The results we obtained show the relationships between unit testing effort and source code attributes, including quality indicators, and suggest the possibility of directing testing effort using metrics.
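As an illustration of the kind of metrics-based risk model the thesis explores, the sketch below trains a scikit-learn classifier on hypothetical class-level metrics (the column names, file name, and high_test_effort label are all assumptions, not the thesis's dataset) and ranks classes by predicted risk so that testing effort could be directed toward them first.

```python
# Hypothetical sketch: flag high-risk classes from source code metrics.
# Column names, file name, and label are assumptions for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("class_metrics.csv")            # one row per software class (assumed file)
features = ["wmc", "cbo", "rfc", "lcom", "loc"]  # assumed OO metric columns
X, y = df[features], df["high_test_effort"]      # assumed binary label

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated F1:", cross_val_score(model, X, y, cv=5, scoring="f1").mean())

# Rank classes by predicted risk so testing effort can be directed first
# toward the classes most likely to need extensive unit tests.
model.fit(X, y)
df["risk"] = model.predict_proba(X)[:, 1]
print(df.sort_values("risk", ascending=False)[["risk"] + features].head())
```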
Abstract:
Bin picking is a process of great interest in industry, since it allows greater automation, increased production capacity, and reduced costs. It has evolved considerably over the years, and this evolution has led to the adoption of 3D perception systems. The main objective of this work is to develop a bin picking system using only 3D perception. The system must be able to determine the position and orientation of objects of different shapes and sizes, placed randomly on a work surface. The objects used in the experimental tests are spheres, cylinders, and prisms, since they cover the geometric shapes found in many products subjected to bin picking. After identifying and selecting the object to be picked, the manipulator must autonomously position itself to approach and retrieve it. Data acquisition is performed with a Kinect camera. Of the data received, only the depth information is used, so this work focuses on the analysis and processing of point clouds. The developed system meets the established objectives: it can locate and pick objects in various positions and orientations, and its processing speed is compatible with the application at hand.
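As an illustration of the point-cloud processing such a system relies on, the sketch below fits a sphere to depth data with a simple RANSAC loop in NumPy; the tolerance, iteration count, and the assumption of a spherical object are illustrative choices, not the thesis's actual implementation.

```python
# Illustrative RANSAC sphere fit on a point cloud (N x 3 array of XYZ points, metres).
# Thresholds and iteration counts are arbitrary assumptions, not values from the thesis.
import numpy as np

def fit_sphere(pts):
    # Linearised fit: 2*p.c + d = |p|^2 with d = r^2 - |c|^2, least squares over the sample.
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = x[:3], x[3]
    return center, np.sqrt(d + center @ center)

def ransac_sphere(cloud, iters=500, tol=0.005, seed=0):
    rng = np.random.default_rng(seed)
    best = (None, None, 0)                       # (centre, radius, inlier count)
    for _ in range(iters):
        sample = cloud[rng.choice(len(cloud), 4, replace=False)]
        c, r = fit_sphere(sample)                # degenerate samples simply give poor fits
        inliers = int((np.abs(np.linalg.norm(cloud - c, axis=1) - r) < tol).sum())
        if inliers > best[2]:
            best = (c, r, inliers)
    return best                                  # centre and radius of the best-supported sphere
```

The recovered centre and radius would then feed the grasp-planning step; cylinders and prisms would need their own model fits.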
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and the graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer, in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. A probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great deal of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
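As a minimal illustration of top-k answer selection over scored matches (with a stand-in scoring function for the user-specified importance, and none of the proposal's indexing or pruning), consider the following sketch:

```python
# Illustrative top-k selection over scored subgraph-match answers.
# The scoring function stands in for a user-specified importance measure;
# the actual SIQ/VIQ/AIQ/PIQ algorithms add indexing and pruning on top of this idea.
import heapq

def top_k_answers(candidate_matches, score, k=10):
    """candidate_matches: iterable of substitutions (dicts query_var -> graph vertex).
    score: user-supplied function mapping a substitution to a number."""
    heap = []  # min-heap of (score, counter, match) holding the best k seen so far
    for i, match in enumerate(candidate_matches):
        s = score(match)
        if len(heap) < k:
            heapq.heappush(heap, (s, i, match))
        elif s > heap[0][0]:
            heapq.heapreplace(heap, (s, i, match))
    return [m for _, _, m in sorted(heap, key=lambda t: -t[0])]

# Hypothetical usage: rank answers by an assumed numeric vertex property.
# best = top_k_answers(matches, lambda m: graph.nodes[m["?x"]]["followers"], k=5)
```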
Abstract:
Strawberries harvested for processing as frozen fruit are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e., the calyx) and incurs many disadvantages when performed by hand. Not only does it require maintaining cutting-tool sanitation, but it also increases labor time and the exposure of the de-capped strawberries before in-plant processing. This leads to labor inefficiency and decreased harvest yield. By moving the calyx removal process from the fields to the processing plants, this new practice would reduce field labor and improve management and logistics, while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine uses commercially available equipment to produce a relatively low-cost automated de-calyxing system that can be retrofitted into existing food processing facilities. This dissertation is divided into five sections. The first two sections include a machine overview and a 12-week processing plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx removal system. In short, the loading system uses rotational energy to orient conical strawberries, the machine vision system determines cut locations through real-time RGB feature extraction, and the high-speed multi-waterjet knife system uses direct-drive actuation to place 30,000 psi cutting streams at precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine appears to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
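The dissertation describes its vision system only at a high level; purely as an illustration of RGB/HSV-based calyx localisation, the sketch below segments the green calyx region and returns its centroid as a candidate cut point. The colour thresholds are rough assumptions, not the AVID machine's calibrated values.

```python
# Illustrative calyx localisation by colour segmentation (not the AVID system's actual code).
# HSV thresholds for "green" are rough assumptions and would need calibration.
import cv2
import numpy as np

def calyx_cut_point(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))     # assumed green band
    green = cv2.morphologyEx(green, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(green, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    calyx = max(contours, key=cv2.contourArea)                  # largest green blob
    m = cv2.moments(calyx)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])   # centroid = candidate cut point
```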
Abstract:
New and promising treatments for coronary heart disease are enabled by vascular scaffolds made of poly(L-lactic acid) (PLLA), as demonstrated by Abbott Vascular’s bioresorbable vascular scaffold. PLLA is a semicrystalline polymer whose degree of crystallinity and crystalline microstructure depend on the thermal and deformation history during processing. In turn, the semicrystalline morphology determines scaffold strength and biodegradation time. However, spatially-resolved information about the resulting material structure (crystallinity and crystal orientation) is needed to interpret in vivo observations.
The first manufacturing step of the scaffold is tube expansion in a process similar to injection blow molding. Spatial uniformity of the tube microstructure is essential for the consistent production and performance of the final scaffold. For implantation into the artery, solid-state deformation below the glass transition temperature is imposed on a laser-cut subassembly to crimp it into a small diameter. Regions of localized strain during crimping are implicated in deployment behavior.
To examine the semicrystalline microstructure development of the scaffold, we employed complementary techniques of scanning electron and polarized light microscopy, wide-angle X-ray scattering, and X-ray microdiffraction. These techniques enabled us to assess the microstructure at the micro and nano length scale. The results show that the expanded tube is very uniform in the azimuthal and axial directions and that radial variations are more pronounced. The crimping step dramatically changes the microstructure of the subassembly by imposing extreme elongation and compression. Spatial information on the degree and direction of chain orientation from X-ray microdiffraction data gives insight into the mechanism by which the PLLA dissipates the stresses during crimping, without fracture. Finally, analysis of the microstructure after deployment shows that it is inherited from the crimping step and contributes to the scaffold’s successful implantation in vivo.
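The degree of chain orientation extracted from such azimuthal diffraction data is commonly summarised by Hermans' orientation factor; the sketch below shows that standard calculation (for a chosen reflection), not the actual analysis pipeline used in this work.

```python
# Illustrative Hermans orientation factor from an azimuthal intensity profile I(phi)
# of a WAXS reflection (not this work's actual analysis code).
import numpy as np

def hermans_factor(phi_deg, intensity):
    """phi_deg: azimuthal angles in degrees measured from the reference (e.g. axial) direction.
    Returns f = (3<cos^2 phi> - 1)/2: 1 = perfect alignment, 0 = random, -0.5 = perpendicular."""
    phi = np.radians(phi_deg)
    w = intensity * np.sin(phi)                       # solid-angle weighting
    cos2 = np.trapz(w * np.cos(phi) ** 2, phi) / np.trapz(w, phi)
    return 0.5 * (3.0 * cos2 - 1.0)
```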
Abstract:
With recent advances in remote sensing processing technology, it has become more feasible to begin analysis of the enormous historical archive of remotely sensed data. This historical data provides valuable information on a wide variety of topics, which can benefit the lives of millions of people if processed correctly and in a timely manner. One such field of benefit is landslide mapping and inventory. These data provide a historical reference for those who live near high-risk areas, so that future disasters may be avoided. In order to map landslides properly from remote imagery, an optimum method must first be determined. Historically, mapping has been attempted using pixel-based methods such as unsupervised and supervised classification. These methods are limited in that they characterize an image only spectrally, based on single-pixel values. This produces results prone to false positives and often without meaningful objects being delineated. Recently, several reliable methods of Object-Oriented Analysis (OOA) have been developed that utilize a full range of spectral, spatial, textural, and contextual parameters to delineate regions of interest. A comparison of these two approaches on a historical dataset of the landslide-affected city of San Juan La Laguna, Guatemala demonstrated the benefits of OOA methods over unsupervised classification. Overall accuracies of 96.5% and 94.3% and F-scores of 84.3% and 77.9% were achieved for the OOA and unsupervised classification methods, respectively. The larger difference in F-score results from the low precision of unsupervised classification, caused by poor false-positive removal, the greatest shortcoming of this method.
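The reported accuracies and F-scores follow from the standard precision/recall definitions; the sketch below applies them to an illustrative confusion matrix to show how abundant false positives depress precision, and hence F-score, even while overall accuracy stays high.

```python
# How the reported scores relate to false positives: standard accuracy / F-score
# definitions applied to an assumed (illustrative) landslide confusion matrix.
def scores(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Illustrative counts only: many false positives drag precision, and hence F-score,
# down even when overall accuracy stays high because true negatives dominate.
print(scores(tp=80, fp=40, fn=10, tn=870))
```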
Abstract:
Every space launch increases the overall amount of space debris. Satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. This modeled approach proposes analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require solar illumination of the observed object, as the received signal is temperature dependent. The characterization of debris objects by means of passive imaging techniques allows further studies into the origin, specifications, and future trajectory of debris objects. Conclusions are made regarding the aforementioned thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals. Information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes the utilization of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge regarding the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis. This knowledge may aid in constraining the admissible region for the initial orbit determination process. The resultant orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite's Local Area Awareness via an intimate understanding of the debris environment surrounding the spacecraft.
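The temperature dependence of the received long-wave infrared signal follows from Planck's law; the sketch below computes the in-band radiance of a graybody debris surface at a few plausible temperatures. The band limits and emissivity are assumptions, not parameters of the dissertation's radiometric model.

```python
# Illustrative in-band LWIR radiance of a graybody debris surface (Planck's law),
# showing why the received signal is temperature dependent. Band limits and the
# emissivity value are assumptions, not parameters from the dissertation's model.
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23     # Planck constant, speed of light, Boltzmann (SI)

def band_radiance(T, emissivity=0.9, lam_lo=8e-6, lam_hi=14e-6, n=2000):
    lam = np.linspace(lam_lo, lam_hi, n)                            # wavelength grid [m]
    spectral = 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))
    return emissivity * np.trapz(spectral, lam)                     # W m^-2 sr^-1 in band

for T in (250, 300, 350):                                           # plausible debris temperatures [K]
    print(T, "K ->", band_radiance(T))
```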
Abstract:
Interventions targeting cognitive enhancement are of growing interest in many fields, including neuropsychology. Although there are many methods claimed to maximize a person's cognitive potential, they are rarely supported by scientific research. This thesis first briefly reviews the state of cognitive enhancement interventions. It describes the weaknesses observed in these practices and consequently establishes a standard model against which the various cognitive enhancement techniques could and should be evaluated. A research study is then presented that considers a new cognitive enhancement tool, a perceptual-cognitive training task: 3-dimensional multiple object tracking (3D-MOT). It examines the current evidence for 3D-MOT against the proposed standard model. The results of this project show improvements in attention, visual working memory, and information processing speed. This study represents the first step toward establishing 3D-MOT as a cognitive enhancement tool.
Abstract:
Near-infrared polarimetry is a powerful tool for studying the sources at the center of the Milky Way. The aim of this thesis is to analyze the polarized emission present in the central few light years of the Galactic Center region, in particular the non-thermal polarized emission of Sagittarius~A* (Sgr~A*), the electromagnetic manifestation of the super-massive black hole, and the polarized emission of an infrared-excess source referred to in the literature as DSO/G2. This source is in orbit around Sgr~A*. In this thesis I focus on Galactic Center observations at $\lambda=2.2~\mu m$ ($K_\mathrm{s}$-band) in polarimetry mode during several epochs from 2004 to 2012. The near-infrared polarimetric observations were carried out using the adaptive optics instrument NAOS/CONICA and a Wollaston prism at the Very Large Telescope of ESO (European Southern Observatory). Linear polarization at 2.2 $\mu m$, its flux statistics and time variation, can be used to constrain the physical conditions of the accretion process onto the central super-massive black hole. I present a statistical analysis of the polarized $K_\mathrm{s}$-band emission from Sgr~A* and investigate the most comprehensive sample of near-infrared polarimetric light curves of this source to date. I find several polarized flux excursions over the years and obtain an exponent of about 4 for the power law fitted to the polarized flux density distribution for fluxes above 5~mJy. This distribution is therefore closely linked to the single-state power-law distribution of the total $K_\mathrm{s}$-band flux densities that we reported earlier. I find polarization degrees of the order of 20\%$\pm$10\% and a preferred polarization angle of $13^\circ\pm15^\circ$. Based on simulations of polarimetric measurements given the observed flux density and its uncertainty in orthogonal polarimetry channels, I find that the uncertainties of the polarization parameters below a total flux density of $\sim 2\,{\mathrm{mJy}}$ are probably dominated by observational uncertainties. At higher flux densities there are intrinsic variations of polarization degree and angle within rather well-constrained ranges. Since the emission is most likely optically thin synchrotron radiation, the preferred polarization angle very likely reflects the intrinsic orientation of the Sgr~A* system, i.e. an accretion disk or a jet/wind scenario coupled to the super-massive black hole. Our polarization statistics show that Sgr~A* must be a stable system, both in terms of its geometry and of the accretion process. I also investigate the infrared-excess source called G2 or Dusty S-cluster Object (DSO), moving on a highly eccentric orbit around the Galaxy's central black hole, Sgr~A*. I use near-infrared polarimetric imaging data for the first time to determine the nature and the properties of the DSO and obtain an improved $K_\mathrm{s}$-band identification of this source in median polarimetry images of different observing years. The source starts to deviate from the stellar confusion in the 2008 data and does not show flux density variability in our data set. Furthermore, I measure the polarization degree and angle of this source and conclude, based on simulations of the polarization parameters, that it is an intrinsically polarized source with a polarization angle that varies as it approaches the position of Sgr~A*. I use the interpretation of the DSO polarimetry measurements to assess its possible properties.
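The polarization degree and angle quoted above follow from the standard Stokes-parameter combination of the orthogonal Wollaston channel fluxes; the sketch below shows that calculation with made-up channel values, omitting the calibration steps of the actual analysis.

```python
# Illustrative computation of linear polarization degree and angle from fluxes
# measured in the four Wollaston channels (0, 90, 45, 135 degrees). This is the
# standard Stokes combination, not the thesis's full calibration pipeline.
import numpy as np

def linear_polarization(f0, f90, f45, f135):
    q = (f0 - f90) / (f0 + f90)                  # normalised Stokes Q
    u = (f45 - f135) / (f45 + f135)              # normalised Stokes U
    degree = np.hypot(q, u)                      # fraction (multiply by 100 for %)
    angle = 0.5 * np.degrees(np.arctan2(u, q))   # position angle, degrees
    return degree, angle

# Example with made-up channel fluxes (mJy):
print(linear_polarization(4.0, 2.8, 3.6, 3.2))
```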
Abstract:
Neural representations (NR) have emerged in the last few years as a powerful tool to represent signals from several domains, such as images, 3D shapes, or audio. Indeed, deep neural networks have been shown to be capable of approximating the continuous functions that describe a given signal with theoretically infinite resolution. This finding allows obtaining representations whose memory footprint is fixed and decoupled from the resolution at which the underlying signal can be sampled, something that is not possible with traditional discrete representations, e.g., grids of pixels for images or voxels for 3D shapes. Over the last two years, many techniques have been proposed to improve the ability of NR to approximate high-frequency details and to make the optimization procedures required to obtain NR less demanding, both in terms of time and of data requirements, motivating many researchers to deploy NR as the main form of data representation in complex pipelines. Following this line of research, we first show that NR can precisely approximate Unsigned Distance Functions, providing an effective way to represent garments, which feature open 3D surfaces and unknown topology. Then, we present a pipeline to obtain, in a few minutes, a compact Neural Twin® for a given object, by exploiting recent advances in modeling neural radiance fields. Furthermore, we move a step in the direction of adopting NR as a standalone representation, by considering the possibility of performing downstream tasks by processing the NR weights directly. We first show that deep neural networks can be compressed into compact latent codes. Then, we show how this technique can be exploited to perform deep learning on implicit neural representations (INR) of 3D shapes, by looking only at the weights of the networks.
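As a toy illustration of what a neural representation is, the sketch below overfits a small Fourier-feature MLP to a 1D signal so that the network weights become the representation; it is not the thesis's UDF or Neural Twin® pipeline.

```python
# Minimal sketch of a neural representation: an MLP with a Fourier-feature encoding
# overfit to a single signal so that the weights become the representation.
# Toy 1D example only; architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class FourierMLP(nn.Module):
    def __init__(self, n_freq=16, hidden=128):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(n_freq)          # fixed encoding frequencies
        self.net = nn.Sequential(
            nn.Linear(2 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):                                  # x: (N, 1) coordinates in [0, 1]
        enc = torch.cat([torch.sin(x * self.freqs), torch.cos(x * self.freqs)], dim=-1)
        return self.net(enc)

x = torch.linspace(0, 1, 512).unsqueeze(1)
y = torch.sin(20 * x) * torch.exp(-3 * x)                  # the "signal" to represent
model = FourierMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):                                      # overfit on purpose
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print("final reconstruction MSE:", loss.item())
```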
Abstract:
The aim of this study was to evaluate fat substitutes in the processing of sausages prepared with surimi made from piramutaba filleting waste. The formulation ingredients were mixed with the fat substitutes added according to a 2^(4-1) fractional factorial design, in which the independent variables manioc starch (Ms), hydrogenated soy fat (F), texturized soybean protein (Tsp), and carrageenan (Cg) were evaluated against the responses of pH, texture (Tx), raw batter stability (RBS), and water holding capacity (WHC) of the sausage. The fat substitutes were evaluated in 11 formulations, and the results showed that the greatest effects on the responses were those of Ms, F, and Cg; Tsp was therefore eliminated from the formulation. To find the best formulation for processing piramutaba sausage, a full 2^3 factorial design was then used to evaluate the concentrations of the fat substitutes over a wider range. The optimum concentrations of fat substitutes found for the sausage formulation were carrageenan (0.51%), manioc starch (1.45%), and fat (1.2%).
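The runs of such a 2^(4-1) screening design can be generated from a full 2^3 design with the fourth factor confounded with the three-way interaction; the sketch below uses the common generator Cg = Ms·F·Tsp, which is an assumption, since the abstract does not state the actual generator or factor coding.

```python
# Illustrative construction of the 8 coded runs of a 2^(4-1) fractional factorial
# screening design, assuming the generator Cg = Ms * F * Tsp (not stated in the abstract).
from itertools import product

runs = []
for a, b, c in product((-1, 1), repeat=3):   # full 2^3 design in the first three factors
    runs.append({"Ms": a, "F": b, "Tsp": c, "Cg": a * b * c})  # fourth factor confounded

for r in runs:                               # -1 = low level, +1 = high level
    print(r)
```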
Abstract:
The aim was to investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Twenty-three children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and the staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance in the auditory analysis abilities. The groups performed similarly in auditory closure, whereas the children with stroke showed pronounced deficits in selective attention and temporal processing. Most children with stroke showed moderately impaired auditory competence. Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.
Abstract:
The aim of this research was to analyze temporal auditory processing and phonological awareness in school-age children with benign childhood epilepsy with centrotemporal spikes (BECTS). The patient group (GI) consisted of 13 children diagnosed with BECTS; the control group (GII) consisted of 17 healthy children. After neurological and peripheral audiological assessment, the children underwent a behavioral auditory evaluation and a phonological awareness assessment. The procedures applied were the Gaps-in-Noise test (GIN), the Duration Pattern test, and the Phonological Awareness test (PCF). Results were compared between the groups, and a correlation analysis was performed between temporal tasks and phonological awareness performance. GII performed significantly better than the children with BECTS (GI) in both the GIN and the Duration Pattern test (P < 0.001). GI performed significantly worse in all four categories of phonological awareness assessed: syllabic (P = 0.001), phonemic (P = 0.006), rhyme (P = 0.015), and alliteration (P = 0.010). Statistical analysis showed a significant positive correlation between the phonological awareness assessment and the Duration Pattern test (P < 0.001). From the analysis of the results, it was concluded that children with BECTS may have difficulties in temporal resolution, temporal ordering, and phonological awareness skills. A correlation was observed between auditory temporal processing and phonological awareness in the studied sample.
Abstract:
The formation of biofilms on stainless steel coupons by Enterococcus faecalis and Enterococcus faecium isolated from ricotta processing was evaluated, and the effect of cleaning and sanitization procedures in the control of these biofilms was determined. Biofilm formation was observed while varying the incubation temperature (7, 25 and 39°C) and time (0, 1, 2, 4, 6 and 8 days). At 7°C, the counts of E. faecalis and E. faecium were below 2 log10 CFU/cm². At 25 and 39°C, after 1 day, the counts of E. faecalis and E. faecium were 5.75 and 6.07 log10 CFU/cm², respectively, which is characteristic of biofilm formation. The tested sanitation procedures, (a) acid-anionic tensioactive cleaning, (b) anionic tensioactive cleaning + sanitizer, and (c) acid-anionic tensioactive cleaning + sanitizer, were effective in removing the biofilms, reducing the counts to levels below 0.4 log10 CFU/cm². The sanitizer biguanide was the least effective, and peracetic acid was the most effective. These studies revealed the ability of enterococci to form biofilms and the importance of the cleaning step and the type of sanitizer used in sanitation processes for the effective removal of biofilms.