980 results for Point cloud processing
Abstract:
Micellar solutions of polystyrene-block-polybutadiene and polystyrene-block-polyisoprene in propane are found to exhibit significantly lower cloud pressures than the corresponding hypothetical nonmicellar solutions. Such a cloud-pressure reduction indicates the extent to which micelle formation enhances the apparent diblock solubility in near-critical and hence compressible propane. Concentration-dependent pressure-temperature points beyond which no micelles can be formed, referred to as the micellization end points, are found to depend on the block type, size, and ratio. The cloud-pressure reduction and the micellization end point measured for styrene-diene diblocks in propane should be characteristic of all amphiphilic diblock copolymer solutions that form micelles in compressible solvents.
Abstract:
A specific manufacturing process to obtain continuous glass fiber-reinforced PTFE laminates was studied and some of their mechanical properties were evaluated. Young's modulus and maximum strength were measured by three-point bending and tensile tests using the Digital Image Correlation (DIC) technique. Adhesion tests, thermal analysis and microscopy were used to evaluate the fiber-matrix adhesion, which is strongly dependent on the sintering time. The composite material obtained had a Young's modulus of 14.2 GPa and an ultimate strength of 165 MPa, corresponding to approximately 24 times the modulus and six times the ultimate strength of pure PTFE. These results show that the PTFE composite, manufactured under specific conditions, has great potential to provide parts with a performance suitable for structural applications.
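As a quick plausibility check of the reported ratios (not part of the original abstract), the short sketch below recomputes them under assumed handbook values for unfilled PTFE of roughly 0.6 GPa modulus and 27 MPa tensile strength:

    # Plausibility check of the reported stiffness/strength ratios.
    # Baseline properties of unfilled PTFE are assumed, not taken from the abstract.
    E_composite_GPa, S_composite_MPa = 14.2, 165.0   # measured laminate values
    E_ptfe_GPa, S_ptfe_MPa = 0.6, 27.0               # assumed pure-PTFE values

    print(f"modulus ratio  ~ {E_composite_GPa / E_ptfe_GPa:.0f}x")   # ~24x
    print(f"strength ratio ~ {S_composite_MPa / S_ptfe_MPa:.0f}x")   # ~6x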
Abstract:
Single-processor microprocessors (CPUs) saw rapid performance growth and falling costs for about twenty years. These microprocessors brought computing power on the order of GFLOPS (Giga Floating Point Operations per Second) to desktop PCs and hundreds of GFLOPS to server clusters. This rise enabled new program features, better user interfaces and many other benefits. However, the growth slowed abruptly in 2003 because of ever-higher energy consumption and heat dissipation problems, which prevented further increases in clock frequency. The physical limits of silicon were drawing closer. To work around the problem, CPU (Central Processing Unit) manufacturers began designing multicore microprocessors, a choice that had a considerable impact on the developer community, accustomed to thinking of software as a series of sequential commands. Programs that had always benefited from performance improvements with each new CPU generation therefore stopped gaining performance: running on a single core, they could not exploit the full power of the CPU. To take full advantage of the new CPUs, concurrent programming, previously used only on expensive systems or supercomputers, became an increasingly common practice among developers. At the same time, the video game industry captured a considerable market share: in 2013 alone, almost 100 billion dollars will be spent on gaming hardware and software. To make their titles more attractive, software houses developing video games rely on ever more powerful and often poorly optimized graphics engines, making them extremely demanding in terms of performance. For this reason GPU (Graphics Processing Unit) manufacturers, especially in the last decade, have engaged in a genuine performance race that has led to products with staggering computing power. Unlike CPUs, which in the early 2000s took the multicore route to keep supporting sequential programs, GPUs became manycore, with hundreds upon hundreds of small cores performing computations in parallel. Can this immense computing power be used in other application fields? The answer is yes, and the goal of this thesis is precisely to assess, at the current state of the art, how and with what efficiency generic software can make use of the GPU instead of the CPU.
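As a minimal, hedged illustration of the CPU-versus-GPU question the thesis investigates (not code from the thesis itself), the sketch below times the same dense matrix multiplication with NumPy on the CPU and, if available, with the CuPy library on a CUDA GPU; CuPy and the matrix size are assumptions chosen purely for brevity:

    # Hypothetical CPU vs. GPU comparison; CuPy and the problem size are assumptions.
    import time
    import numpy as np

    n = 2048
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    t0 = time.perf_counter()
    np.matmul(a, b)                        # CPU path
    cpu_s = time.perf_counter() - t0

    try:
        import cupy as cp                  # GPU path, requires a CUDA device
        a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
        cp.matmul(a_gpu, b_gpu)            # warm-up (kernel compilation, transfers)
        cp.cuda.Device().synchronize()
        t0 = time.perf_counter()
        cp.matmul(a_gpu, b_gpu)
        cp.cuda.Device().synchronize()     # wait for the asynchronous kernel
        print(f"CPU: {cpu_s:.3f} s, GPU: {time.perf_counter() - t0:.3f} s")
    except ImportError:
        print(f"CPU: {cpu_s:.3f} s (CuPy not installed, GPU path skipped)")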
Abstract:
In recent years, the use of Reverse Engineering systems has attracted considerable interest for a wide range of applications. Many research activities therefore focus on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis deals with the definition of two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterizing the error in the 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operating conditions. The systematic error in the acquired data is thus compensated, in order to increase accuracy. Moreover, the definition of a 3D thermogram is examined: the object's geometrical information and its thermal properties, coming from a thermographic inspection, are combined in order to obtain a temperature value for each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize temperature values and make the thermal data independent of the thermal camera's point of view.
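A minimal sketch of the two post-processing ideas described above; the correction table, the pinhole projection model and all parameter values are illustrative assumptions rather than the thesis' actual calibration data:

    # Sketch: (1) compensate systematic scanner error with a correction array chosen
    # by acquisition parameters, (2) attach a temperature value to each 3D point.
    import numpy as np

    def correct_points(points, scan_distance_mm):
        """Apply a per-axis systematic-error offset selected by acquisition distance."""
        correction_table = {400: np.array([0.02, -0.01, 0.05]),   # mm offsets (assumed)
                            600: np.array([0.04, -0.02, 0.09])}
        nearest = min(correction_table, key=lambda d: abs(d - scan_distance_mm))
        return points + correction_table[nearest]

    def build_thermogram(points, thermal_image, fx, fy, cx, cy):
        """Assign each 3D point the temperature of the thermal pixel it projects onto."""
        u = (fx * points[:, 0] / points[:, 2] + cx).astype(int)
        v = (fy * points[:, 1] / points[:, 2] + cy).astype(int)
        u = np.clip(u, 0, thermal_image.shape[1] - 1)
        v = np.clip(v, 0, thermal_image.shape[0] - 1)
        return np.column_stack([points, thermal_image[v, u]])    # columns: x, y, z, T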
Abstract:
The full blood cell (FBC) count is the most common indicator of disease. At present, hematology analyzers are used for blood cell characterization, but there has recently been interest in techniques that take advantage of microscale devices and the intrinsic properties of cells for increased automation and decreased cost. Microfluidic technologies offer solutions for handling and processing small volumes of blood (2-50 µL taken by finger prick) for point-of-care (PoC) applications. Several PoC blood analyzers are in use and may have applications in telemedicine, outpatient monitoring and medical care in resource-limited settings. They have the advantage of being easy to move and much cheaper than traditional analyzers, which require bulky instruments and consume large amounts of reagents. The development of miniaturized point-of-care diagnostic tests may be enabled by chip-based technologies for cell separation and sorting. Many current diagnostic tests depend on fractionated blood components: plasma, red blood cells (RBCs), white blood cells (WBCs), and platelets. Specifically, white blood cell differentiation and counting provide valuable information for diagnostic purposes. For example, a low number of WBCs, called leukopenia, may be an indicator of bone marrow deficiency or failure, collagen-vascular diseases, or disease of the liver or spleen. Leukocytosis, a high number of WBCs, may be due to anemia, infectious diseases, leukemia or tissue damage. In the Laboratory of Hybrid Biodevices at the University of Southampton, a functioning micro-impedance cytometer technology for WBC differentiation and counting was developed. It is capable of classifying cells and particles on the basis of their dielectric properties, in addition to their size, without the need for labeling, in a flow format similar to that of a traditional flow cytometer. It was demonstrated that the micro-impedance cytometer system can detect and differentiate monocytes, neutrophils and lymphocytes, the three major human leukocyte populations. The simplicity and portability of the microfluidic impedance chip offer a range of potential applications in cell analysis, including point-of-care diagnostic systems. The microfluidic device has been integrated into a sample preparation cartridge that semi-automatically performs erythrocyte lysis before leukocyte analysis. Erythrocytes are generally lysed manually according to a specific chemical lysis protocol, but this process has been automated in the cartridge. In this research work the chemical lysis protocol, defined in patent US 5155044 A, was optimized in order to improve the white blood cell differentiation and count performed by the integrated cartridge.
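As an illustrative sketch only (not the actual classification algorithm of the Southampton system), impedance events are commonly summarized by an electrical size and an opacity value and then gated into the three major leukocyte populations; the feature names and thresholds below are placeholders:

    # Toy gating of impedance events into lymphocytes, monocytes and neutrophils.
    # Thresholds are illustrative placeholders, not calibrated values.
    def classify_event(electrical_size, opacity):
        if electrical_size < 4.0:        # smallest events: lymphocytes
            return "lymphocyte"
        if opacity > 1.1:                # granular cells tend to show higher opacity
            return "neutrophil"
        return "monocyte"

    events = [(3.2, 0.9), (5.8, 1.3), (5.1, 1.0)]
    counts = {}
    for size, opacity in events:
        label = classify_event(size, opacity)
        counts[label] = counts.get(label, 0) + 1
    print(counts)                        # {'lymphocyte': 1, 'neutrophil': 1, 'monocyte': 1}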
Abstract:
This thesis proposes a middleware solution for scenarios in which sensors produce a large amount of data that must be managed and processed through preprocessing, filtering and buffering operations, in order to improve communication efficiency and bandwidth usage while respecting energy and computational constraints. These components can be optimized through remote tuning operations.
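A minimal sketch of the preprocessing/filtering/buffering idea, assuming a simple deadband filter and batch transmission; the buffer size and deadband are exactly the kind of parameters a remote tuning operation would adjust (all names and values below are illustrative):

    # Sensor-side buffer that drops near-duplicate readings and sends data in
    # batches to reduce bandwidth; parameters are illustrative and remotely tunable.
    class SensorBuffer:
        def __init__(self, send, max_size=32, deadband=0.5):
            self.send = send                 # callback that transmits one batch
            self.max_size = max_size         # tunable: batch size
            self.deadband = deadband         # tunable: minimum change worth sending
            self.buffer = []
            self.last_value = None

        def push(self, value):
            # Filtering: ignore readings that changed less than the deadband.
            if self.last_value is not None and abs(value - self.last_value) < self.deadband:
                return
            self.buffer.append(value)
            self.last_value = value
            if len(self.buffer) >= self.max_size:
                self.flush()

        def flush(self):
            if self.buffer:
                self.send(self.buffer)       # one transmission for many readings
                self.buffer = []

    buf = SensorBuffer(send=print, max_size=4)
    for reading in [20.0, 20.1, 21.0, 22.0, 23.0]:
        buf.push(reading)                    # prints [20.0, 21.0, 22.0, 23.0]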
Abstract:
This work covers the synthesis of second-generation, ethylene glycol dendrons covalently linked to a surface anchor that contains two, three, or four catechol groups, the molecular assembly in aqueous buffer on titanium oxide surfaces, and the evaluation of the resistance of the monomolecular adlayers against nonspecific protein adsorption in contact with full blood serum. The results were compared to those of a linear poly(ethylene glycol) (PEG) analogue with the same molecular weight. The adsorption kinetics as well as resulting surface coverages were monitored by ex situ spectroscopic ellipsometry (VASE), in situ optical waveguide lightmode spectroscopy (OWLS), and quartz crystal microbalance with dissipation (QCM-D) investigations. The expected compositions of the macromolecular films were verified by X-ray photoelectron spectroscopy (XPS). The results of the adsorption study, performed in a high ionic strength ("cloud-point") buffer at room temperature, demonstrate that the adsorption kinetics increase with increasing number of catechol binding moieties and exceed the values found for the linear PEG analogue. This is attributed to the comparatively smaller and more confined molecular volume of the dendritic macromolecules in solution, the improved presentation of the catechol anchor, and/or their much lower cloud-point in the chosen buffer (close to room temperature). Interestingly, in terms of mechanistic aspects of "nonfouling" surface properties, the dendron films were found to be much stiffer and considerably less hydrated in comparison to the linear PEG brush surface, closer in their physicochemical properties to oligo(ethylene glycol) alkanethiol self-assembled monolayers than to conventional brush surfaces. Despite these differences, both types of polymer architectures at saturation coverage proved to be highly resistant toward protein adsorption. Although associated with higher synthesis costs, dendritic macromolecules are considered to be an attractive alternative to linear polymers for surface (bio)functionalization in view of their spontaneous formation of ultrathin, confluent, and nonfouling monolayers at room temperature and their outstanding ability to present functional ligands (coupled to the termini of the dendritic structure) at high surface densities.
Abstract:
Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernible at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in “ideal” settings. An ideal setting for lineament detection is where the influences of human development, vegetation, and climatic situations are minimal and hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments, nor have different approaches been compared in non-ideal settings. A new approach for image processing and synthesis was developed to identify successful satellite imagery types for lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover that limit the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products that were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to interpretations from other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
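A sketch of the synthesis step as described above (12 co-registered binary lineament rasters summed into a coincidence raster, then thresholded at an agreement of at least 4); the random placeholder rasters stand in for the actual image products:

    # Coincidence raster from 12 co-registered binary lineament interpretations.
    import numpy as np

    def coincidence_raster(interpretations):
        """interpretations: list of 2D arrays (1 = lineament zone, 0 = background)."""
        return np.sum(np.stack(interpretations), axis=0)

    def composite_interpretation(interpretations, min_agreement=4):
        """Keep only cells where at least `min_agreement` interpretations agree."""
        return coincidence_raster(interpretations) >= min_agreement

    # Placeholder inputs instead of the real ASTER/ETM+/QuickBird/RADARSAT products.
    rng = np.random.default_rng(0)
    rasters = [rng.integers(0, 2, size=(100, 100), dtype=np.uint8) for _ in range(12)]
    mask = composite_interpretation(rasters)
    print(mask.sum(), "cells with agreement of 4 or more interpretations")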
Abstract:
In recent years, advanced metering infrastructure (AMI) has been a main research focus because the traditional power grid has been unable to meet development requirements. There has been an ongoing effort to increase the number of AMI devices that provide real-time data readings to improve system observability. AMI deployed across distribution secondary networks provides load and consumption information for individual households, which can improve grid management. The significant upgrade costs associated with retrofitting existing meters with network-capable sensing can be made more economical by using image processing methods to extract usage information from images of the existing meters. This thesis presents a new solution that uses online exchange of power consumption information with a cloud server without modifying the existing electromechanical analog meters. In this framework, a systematic approach to extracting energy data from images replaces the manual reading process. In one case study, the digital imaging approach is compared to the averages determined by visual readings over a one-month period.
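A sketch of only the final dial-to-digits step, assuming pointer angles have already been extracted from the meter image by some detection stage; the clockwise-from-zero angle convention and the alternating dial direction are assumptions about a typical electromechanical register:

    # Convert pointer angles from an analog register's dials into a numeric reading.
    # Assumes angles in degrees measured clockwise from the "0" mark and that
    # adjacent dials alternate direction (a common electromechanical layout).
    def dials_to_reading(angles_deg):
        digits = []
        for i, angle in enumerate(angles_deg):
            position = (angle % 360) / 36.0              # 10 digits per revolution
            digit = int(position) if i % 2 == 0 else int(10 - position) % 10
            digits.append(digit)
        return int("".join(str(d) for d in digits))

    print(dials_to_reading([72.0, 290.0, 144.0, 36.0]))  # -> 2149 (kWh, illustrative)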
Abstract:
The development of the Internet has made it possible to transfer data ‘around the globe at the click of a mouse’. In particular, new business models such as cloud computing, the newest driver illustrating the speed and breadth of the online environment, allow this data to be processed across national borders on a routine basis. A number of factors cause the Internet to blur the lines between public and private space: Firstly, globalization and the outsourcing of economic actors entail an ever-growing exchange of personal data. Secondly, security pressure in the name of the legitimate fight against terrorism opens access to a significant amount of data for an increasing number of public authorities. And finally, the tools of the digital society accompany everyone at each stage of life, leaving permanent individual and borderless traces in both space and time. Therefore, calls from both the public and private sectors for an international legal framework for privacy and data protection have become louder. Companies such as Google and Facebook have also come under continuous pressure from governments and citizens to reform the use of data. Thus, Google was not alone in calling for the creation of ‘global privacy standards’. Efforts are underway to review established privacy foundation documents. There are similar efforts to look at standards in global approaches to privacy and data protection. Among the most recent remarkable steps was the Montreux Declaration, in which the privacy commissioners appealed to the United Nations ‘to prepare a binding legal instrument which clearly sets out in detail the rights to data protection and privacy as enforceable human rights’. This appeal was repeated in 2008 at the 30th international conference held in Strasbourg, at the 31st conference in 2009 in Madrid, and in 2010 at the 32nd conference in Jerusalem. In a globalized world, free data flow has become an everyday need. Thus, the aim of global harmonization should be that it makes no difference for data users or data subjects whether data processing takes place in one or in several countries. Concern has been expressed that data users might seek to avoid privacy controls by moving their operations to countries which have lower standards in their privacy laws or no such laws at all. To control that risk, some countries have implemented special controls in their domestic law. Again, such controls may interfere with the need for free international data flow. A formula has to be found to ensure that privacy protection at the international level does not prejudice this principle.
Abstract:
Applying location-focused data protection law within the context of a location-agnostic cloud computing framework is fraught with difficulties. While the Proposed EU Data Protection Regulation introduces many changes to the current data protection framework, the complexities of data processing in the cloud involve various layers of actors and intermediaries that have not been properly addressed. This leaves gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the relevant provisions of the regulation that will have an impact on cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.
Abstract:
The 3' cleavage generating non-polyadenylated animal histone mRNAs depends on the base pairing between U7 snRNA and a conserved histone pre-mRNA downstream element. This interaction is enhanced by a 100 kDa zinc finger protein (ZFP100) that forms a bridge between an RNA hairpin element upstream of the processing site and the U7 small nuclear ribonucleoprotein (snRNP). The N-terminus of Lsm11, a U7-specific Sm-like protein, was shown to be crucial for histone RNA processing and to bind ZFP100. By further analysing these two functions of Lsm11, we find that Lsm11 and ZFP100 can undergo two interactions, i.e. between the Lsm11 N-terminus and the zinc finger repeats of ZFP100, and between the N-terminus of ZFP100 and the Sm domain of Lsm11, respectively. Both interactions are not specific for the two proteins in vitro, but the second interaction is sufficient for a specific recognition of the U7 snRNP by ZFP100 in cell extracts. Furthermore, clustered point mutations in three phylogenetically conserved regions of the Lsm11 N-terminus impair or abolish histone RNA processing. As these mutations have no effect on the two interactions with ZFP100, these protein regions must play other roles in histone RNA processing, e.g. by contacting the pre-mRNA or additional processing factors.
Abstract:
This paper addresses the novel notion of offering a radio access network as a service. Its components may be instantiated on general-purpose platforms with pooled resources (both radio and hardware), dimensioned on demand, elastically and following the pay-per-use principle. A novel architecture is proposed to support this concept. The architecture's strength lies in its modularity, well-defined functional elements and clean separation between operational and control functions. By moving much of the processing traditionally performed in dedicated hardware into the cloud, it allows hardware utilization to be optimized and deployment and operating costs to be reduced. It enables operators to upgrade their networks as well as quickly deploy and adapt resources to demand. Also, new players may more easily enter the market, permitting a virtual network operator to provide connectivity to its users.
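A toy sketch of the on-demand, pay-per-use dimensioning idea only: the number of pooled general-purpose processing instances follows the offered load instead of being provisioned for the peak. The per-instance capacity and the load trace are placeholder assumptions, not figures from the paper:

    # Toy elastic dimensioning of pooled processing resources for a cloud-based RAN.
    import math

    def instances_needed(load_mbps, capacity_per_instance_mbps=100.0):
        return max(1, math.ceil(load_mbps / capacity_per_instance_mbps))

    hourly_load = [120, 340, 560, 410, 90]                  # offered traffic per hour (Mbit/s)
    hourly_instances = [instances_needed(l) for l in hourly_load]
    peak_provisioning = max(hourly_instances) * len(hourly_load)

    print("instances per hour:", hourly_instances)          # [2, 4, 6, 5, 1]
    print("instance-hours, elastic vs. peak-provisioned:",
          sum(hourly_instances), "vs.", peak_provisioning)  # 18 vs. 30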
Abstract:
BACKGROUND Loss-of-function point mutations in the cathepsin C gene are the underlying genetic event in patients with Papillon-Lefèvre syndrome (PLS). PLS neutrophils lack the serine protease activity essential for cathelicidin LL-37 generation from the hCAP18 precursor. AIM We hypothesized that a local deficiency of LL-37 in the infected periodontium is mainly responsible for one of the clinical hallmarks of PLS: severe periodontitis already in early childhood. METHODS To test this hypothesis, we compared the levels of neutrophil-derived enzymes and antimicrobial peptides in gingival crevicular fluid (GCF) and saliva from PLS, aggressive periodontitis and chronic periodontitis patients. RESULTS Although neutrophils were present in GCF at the same level in all periodontitis groups, LL-37 was totally absent in GCF from PLS patients despite large amounts of its precursor, hCAP18. The absence of LL-37 in PLS patients coincided with a deficiency of both cathepsin C and protease 3 activities. The levels of other neutrophilic antimicrobial peptides in GCF from PLS patients, such as alpha-defensins, were comparable to those found in chronic periodontitis. In PLS, microbial analysis revealed a high prevalence of Aggregatibacter actinomycetemcomitans infection. Most strains were susceptible to killing by LL-37. CONCLUSIONS Collectively, these findings imply that the lack of protease 3 activation by dysfunctional cathepsin C in PLS patients leads to a deficit of the antimicrobial and immunomodulatory functions of LL-37 in the gingiva, allowing infection with A. actinomycetemcomitans and the development of severe periodontal disease.