991 results for digital capture
Abstract:
Digital image correlation (DIC) is applied to analyze the deformation mechanisms under transverse compression in a fiber-reinforced composite. To this end, compression tests in a direction perpendicular to the fibers were carried out inside a scanning electron microscope, and secondary electron images were obtained at different magnifications during the test. Optimum DIC parameters to resolve the displacement and strain fields were computed from numerical simulations of a model composite and were applied to micrographs obtained at different magnifications (250×, 2000×, and 6000×). It is shown that DIC of low-magnification micrographs was able to capture the long-range fluctuations in strain due to the presence of matrix-rich and fiber-rich zones, which are responsible for the onset of damage. At higher magnification, the strain fields obtained with DIC qualitatively reproduce the non-homogeneous deformation pattern due to the presence of stiff fibers dispersed in a compliant matrix and provide accurate results for the average composite strain. However, comparison with finite element simulations revealed that DIC was not able to accurately capture the average strain in each phase.
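For readers unfamiliar with how subset-based DIC recovers displacements, the following is a minimal sketch, not code from the paper: it tracks a single subset of a reference micrograph in a deformed micrograph by normalized cross-correlation with scikit-image. The file names, subset location and subset size are placeholder assumptions; production DIC codes add sub-pixel interpolation and compute strains from a full grid of subsets.

```python
# Minimal sketch of the core of subset-based DIC: track the displacement of a
# small subset between a reference and a deformed image via normalized
# cross-correlation. File names and subset location/size are placeholders.
import numpy as np
from skimage import io
from skimage.feature import match_template

ref = io.imread("reference_micrograph.png", as_gray=True)   # hypothetical files
defo = io.imread("deformed_micrograph.png", as_gray=True)

y0, x0, subset = 200, 200, 41            # assumed subset centre and size (pixels)
half = subset // 2
template = ref[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]

# Search for the subset in the deformed image (a restricted search window
# would be used in practice; the whole image is searched here for brevity).
score = match_template(defo, template, pad_input=True)
y1, x1 = np.unravel_index(np.argmax(score), score.shape)

u, v = x1 - x0, y1 - y0                  # integer-pixel displacement components
print(f"displacement: u = {u} px, v = {v} px")
```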
Abstract:
The consumer market has undergone several transformations over time, driven mainly by technological evolution. Technological evolution has given consumers the ability to choose among products and brands, and the opportunity to collaborate and influence the opinion of other consumers by sharing experiences, mainly through digital platforms. CRM (customer relationship management) is the approach companies use to get to know the consumer and build a satisfactory relationship between company and consumer. This relationship aims to satisfy consumers and build loyalty, preventing them from abandoning the brand and from negatively influencing other consumers. e-CRM is electronic customer relationship management, which retains all the traditional characteristics of CRM with the addition of the digital environment. The digital environment has reduced the distance between people and companies and has become a collaborative, low-cost means of interacting with consumers. On the other hand, it is a medium in which the consumer ceases to be passive and becomes active, able to influence not only a small group of friends but an entire network of consumers. Digital analytics is the measurement, collection, analysis and reporting of digital data for the purposes of understanding and optimizing business performance. The use of digital data supports the development of e-CRM by providing an understanding of consumer behavior in an environment where the consumer is active. The digital environment allows a more detailed knowledge of consumers, based not only on purchasing habits but also on interests and interactions. The main objective of this study is to understand how companies apply e-CRM concepts in their business strategies, how digital analytics contributes to the development of e-CRM, and how the critical success factors (human, technological and strategic) affect the implementation and development of e-CRM. Four companies from different segments were studied through a case study approach. Companies increasingly seek to exploit e-CRM strategies in the digital environment, but limitations were identified in the capture, storage and analysis of multichannel information, especially with regard to digital channels. Other factors, such as top management support and employees' understanding of how to handle strategies focused on the individual consumer, were also identified in this study. The study identified the most relevant information for generating electronic customer relationship management strategies and the most relevant aspects of the critical success factors.
Abstract:
The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse, world. The research addressed in several chapters of this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with the ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community-driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, is also discussed in several chapters, in particular the ways in which advanced research methods can lead into language technologies and vice versa, and the ways in which teaching skills can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open access publication and open source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse) and because of the ability to reach non-standard audiences: those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in the wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation. Several chapters focus not on the literary and philological side of classics but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material is engraved or otherwise inscribed, addressing the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise among the several domains, both within and outside academia, that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions about which all scholars need to be aware and self-critical.
Abstract:
Purpose: To compare graticule and image capture assessment of the lower tear film meniscus height (TMH). Methods: Lower tear film meniscus height measures were taken in the right eyes of 55 healthy subjects at two study visits separated by 6 months. Two images of the TMH were captured in each subject with a digital camera attached to a slit-lamp biomicroscope and stored in a computer for later analysis. Using the better of the two images, the TMH was quantified by manually drawing a line across the tear meniscus profile, after which the TMH was measured in pixels and converted into millimetres, where one pixel corresponded to 0.0018 mm. Additionally, graticule measures were carried out by direct observation using a calibrated graticule inserted into the same slit-lamp eyepiece. The graticule was calibrated so that actual readings, in 0.03 mm increments, could be made with a 40× ocular. Results: Smaller values of TMH were found in this study compared with previous studies. TMH as measured with the image capture technique (0.13 ± 0.04 mm) was significantly greater (by approximately 0.01 ± 0.05 mm, p = 0.03) than that measured with the graticule technique (0.12 ± 0.05 mm). No bias was found across the range sampled. Repeatability of the TMH measurements taken at the two study visits showed that graticule measures were significantly different (0.02 ± 0.05 mm, p = 0.01) and highly correlated (r = 0.52, p < 0.0001), whereas image capture measures were similar (0.01 ± 0.03 mm, p = 0.16) and also highly correlated (r = 0.56, p < 0.0001). Conclusions: Although the graticule and image analysis techniques showed similar mean values for TMH, the image capture technique was more repeatable than the graticule technique, which can be attributed to its higher measurement resolution (0.0018 mm vs. 0.03 mm for the graticule). © 2006 British Contact Lens Association.
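Purely as an illustration of the scaling involved in the image-based measurement (not code from the study), the conversion from a manually drawn meniscus line to millimetres is a simple multiplication at the stated resolution of 0.0018 mm per pixel:

```python
# Minimal sketch: convert a tear meniscus height measured in pixels to mm,
# using the calibration stated in the abstract (1 pixel = 0.0018 mm).
MM_PER_PIXEL = 0.0018

def tmh_mm(height_px: float) -> float:
    """Return tear meniscus height in millimetres."""
    return height_px * MM_PER_PIXEL

# Example: a meniscus drawn ~72 pixels tall corresponds to ~0.13 mm,
# the mean image-capture TMH reported in the abstract.
print(round(tmh_mm(72), 2))   # 0.13
```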
Abstract:
In this paper, we propose a new method for fingerprint recognition using long digital straight segments (LDSSs), based on the finding that LDSSs can accurately characterize the global structure of fingerprints. Unlike orientation estimation based on the slope of the straight segments alone, the length of an LDSS provides a measure of the stability of the estimated orientation. In addition, each digital straight segment can be represented by four parameters: x-coordinate, y-coordinate, slope and length. As a result, only about 600 bytes are needed to store all the LDSS parameters of a fingerprint, far less than the storage an orientation field requires. Finally, LDSSs capture the structural information of local regions well and are therefore more practical to apply in the matching process than orientation fields. Experiments conducted on the FVC2002 DB3a and DB4a fingerprint databases show that our method is effective.
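To make the storage claim concrete, here is a small sketch of the four-parameter representation. The byte layout below is an illustrative assumption, not the paper's encoding; it simply shows how a few dozen segments at a handful of bytes each land in the ~600-byte range quoted in the abstract.

```python
# A minimal sketch (assumed layout, not the paper's exact encoding) of the
# four-parameter LDSS representation: x, y, slope, length per segment.
from dataclasses import dataclass
import struct
from typing import List

@dataclass
class LDSS:
    x: int        # x-coordinate of the segment (pixels)
    y: int        # y-coordinate of the segment (pixels)
    slope: float  # orientation of the segment
    length: int   # segment length in pixels; longer => more stable orientation

def pack_ldss_list(segments: List[LDSS]) -> bytes:
    """Pack each segment into 8 bytes: 2+2 for the coordinates, 2 for a
    fixed-point slope, 2 for the length -- an assumed, illustrative encoding."""
    out = bytearray()
    for s in segments:
        out += struct.pack("<HHhH", s.x, s.y, int(s.slope * 1000), s.length)
    return bytes(out)

# With this assumed 8-byte encoding, ~75 segments occupy about 600 bytes,
# which is the order of magnitude quoted in the abstract.
template = [LDSS(x=10 * i, y=20 * i, slope=0.5, length=30) for i in range(75)]
print(len(pack_ldss_list(template)))   # 600
```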
Abstract:
The permanent pigmentation of the leaves of tropical rain forest herbs with anthocyanin has traditionally been viewed as a mechanism for enhancing transpiration by increased heat absorption. We report measurements to ±0.1 °C on four Indo-Malesian forest species polymorphic with respect to color. There were no detectable differences in temperature between cyanic and green leaves. In deeply shaded habitats, any temperature difference would arise from black-body infrared radiation which all leaves absorb and to which anthocyanins are transparent. Reflectance spectra of the lower leaf surfaces of these species revealed increased reflectance around 650-750 nm for cyanic leaves compared with green leaves of the same species. In all species anthocyanin was located in a single layer of cells immediately below the photosynthetic tissue. These observations provide empirical evidence that the cyanic layer can improve photosynthetic energy capture by back-scattering additional light through the photosynthetic tissue.
Abstract:
Fossil fuels constitute a significant fraction of the world's energy demand, and their burning emits huge amounts of carbon dioxide into the atmosphere. Therefore, the limited availability of fossil fuel resources and the environmental impact of their use require a change to alternative energy sources or carriers (such as hydrogen) in the foreseeable future. The development of methods to mitigate carbon dioxide emission into the atmosphere is equally important. Hence, extensive research has been carried out on the development of cost-effective technologies for carbon dioxide capture and on techniques to establish a hydrogen economy. Hydrogen is a clean energy fuel with a very high specific energy content of about 120 MJ/kg and an energy density of 10 Wh/kg. However, its potential is limited by the lack of environment-friendly production methods and of a suitable storage medium. Conventional hydrogen production methods such as steam methane reforming and coal gasification were modified by the inclusion of NaOH. The modified methods are thermodynamically more favorable and can be regarded as near-zero-emission production routes. Further, suitable catalysts were employed to accelerate the proposed NaOH-assisted reactions, and a relation between reaction yield and catalyst size was established. A 1:1:1 molar mixture of LiAlH4, NaNH2 and MgH2 was investigated as a potential hydrogen storage medium, and the hydrogen desorption mechanism was explored using in-situ XRD and Raman spectroscopy. Mesoporous metal oxides were assessed for CO2 capture in both the power and non-power sectors. 96.96% of mesoporous MgO (325 mesh size, surface area = 95.08 ± 1.5 m2/g) was converted to MgCO3 at 350°C and 10 bar CO2. By contrast, the absorption capacity of 1 h ball-milled zinc oxide was low, 0.198 g CO2/g ZnO at 75°C and 10 bar CO2. Interestingly, 57% mass conversion of an Fe and Fe3O4 mixture to FeCO3 was observed at 200°C and 10 bar CO2. MgO, ZnO and Fe3O4 could be completely regenerated at 550°C, 250°C and 350°C, respectively. Furthermore, the possible retrofit of MgO, and of a mixture of Fe and Fe3O4, to a 300 MWe coal-fired power plant and to an iron-making plant was also evaluated.
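As a back-of-the-envelope check (not from the thesis) on what the reported conversions mean, the theoretical CO2 uptake of an oxide follows directly from the molar masses if complete carbonation (MO + CO2 → MCO3) is assumed:

```python
# Back-of-the-envelope check (not from the thesis): theoretical CO2 uptake of
# a metal oxide assuming complete carbonation MO + CO2 -> MCO3, in g CO2 per
# g oxide, and the fraction of that limit reached by the reported results.
M_CO2, M_MgO, M_ZnO = 44.01, 40.30, 81.38   # molar masses, g/mol

max_uptake_MgO = M_CO2 / M_MgO              # ~1.09 g CO2 / g MgO
max_uptake_ZnO = M_CO2 / M_ZnO              # ~0.54 g CO2 / g ZnO

# Reported: 96.96% conversion of MgO; 0.198 g CO2/g ZnO for ball-milled ZnO.
uptake_MgO = 0.9696 * max_uptake_MgO        # ~1.06 g CO2 / g MgO
frac_ZnO = 0.198 / max_uptake_ZnO           # ~0.37 of the theoretical limit

print(f"MgO uptake ~{uptake_MgO:.2f} g/g, ZnO at {frac_ZnO:.0%} of its limit")
```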
Abstract:
Carbon capture and storage (CCS) can contribute significantly to addressing the global greenhouse gas (GHG) emissions problem. Despite widespread political support, CCS remains unknown to the general public. Public perception researchers have found that, when asked, the public is relatively unfamiliar with CCS, yet many individuals voice specific safety concerns regarding the technology. We believe this leads many stakeholders to conflate CCS with the better-known and more visible technology of hydraulic fracturing (fracking). We support this with a content analysis of media coverage, web analytics, and public lobbying records. Furthermore, we present results from a survey of United States residents. This first-of-its-kind survey assessed participants' knowledge, opinions and support of CCS and fracking technologies. The survey showed that participants had more knowledge of fracking than of CCS, and that knowledge of fracking made participants less willing to support CCS projects. Additionally, it showed that participants viewed the two technologies as having similar risks and similar risk intensities. In the CCS stakeholder literature, judgment and decision-making (JDM) frameworks are noticeably absent, and public perception is not discussed in terms of cognitive biases as a way of understanding or explaining irrational decisions, yet these survey results show evidence of both anchoring bias and the ambiguity effect. Public acceptance of CCS is essential for a national low-carbon future plan. In conclusion, we propose changes in communication and incentive programs to increase support for CCS.
Abstract:
This thesis demonstrates a new way to achieve sparse biological sample detection, using magnetic bead manipulation on a digital microfluidic device. Sparse sample detection was made possible through two steps: sparse sample capture and fluorescent signal detection. For the first step, the immunological reaction between antibody and antigen enables binding between target cells and antibody-coated magnetic beads, hence achieving sample capture. For the second step, fluorescent detection is achieved via fluorescent signal measurement and magnetic bead manipulation. Across these two steps, three functions need to work together: magnetic bead manipulation, fluorescent signal measurement and immunological binding. The first function is magnetic bead manipulation, which uses current-carrying wires embedded in the actuation electrode of an electrowetting-on-dielectric (EWD) device. The current-wire structure serves as a microelectromagnet capable of segregating and separating magnetic beads. The device achieves high segregation efficiency when the wire spacing is 50 µm, and it is also capable of separating two kinds of magnetic beads within a 65 µm distance. The design ensures that magnetic bead manipulation and the EWD function can be operated simultaneously without introducing additional steps into the fabrication process. Half-circle-shaped current wires were designed in later devices to concentrate magnetic beads in order to increase the SNR of sample detection. The second function is immunological binding. Immunological reaction kits were selected to ensure the compatibility of the target cells, the magnetic bead function and the EWD function. The magnetic bead choice ensures the binding efficiency and the survivability of target cells, and the bead selection and binding mechanism used in this work can be applied to a wide variety of samples with a simple switch of the type of antibody. The last function is fluorescent measurement. Fluorescent measurement of sparse samples is made possible by using fluorescent stains and a method to increase SNR; the improved SNR is achieved by target cell concentration and a reduced sensing area. The theoretical detection limit of the entire sparse sample detection system is as low as 1 colony-forming unit per mL (CFU/mL).
Abstract:
Mainstream cinema is, to an ever-increasing degree, deploying digital imaging technologies to work with the human form: expanding on it, morphing its features, or providing new ways of presenting it. This has prompted theories of simulation and virtualisation to explore the cultural and aesthetic implications, anxieties, and possibilities of a loss of the 'real', in turn often defined in terms of the photographic trace. This thesis aims to provide another perspective. Following some recent imperatives in art theory, this study introduces and expands on the notion of the human figure as pertaining to processes of figuration rather than (only) representation. The notions of the figure and figuration have an extended history in the fields of hermeneutics, aesthetics, and philosophy, through which they have come to stand for particular theories and methodologies with regard to images and their communication of meaning. The objective of this study is to appropriate these for film theory, culminating in two case studies that demonstrate how formal parameters present and organise ideas of the body and the human. The aim is to develop a material approach to contemporary digital practices, in which bodies have not ceased to matter but are framed in new ways by new technologies.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
The Exhibitium Project, awarded by the BBVA Foundation, is a data-driven project developed by an international consortium of research groups. One of its main objectives is to build a prototype that will serve as a base to produce a platform for the recording and exploitation of data about art exhibitions available on the Internet. Therefore, our proposal aims to expose the methods, procedures and decision-making processes that have governed the technological implementation of this prototype, especially with regard to the reuse of WordPress (WP) as a development framework.
Abstract:
Relentless urbanization and the continuous growth of the urban population are creating new challenges for public administrations, which need solutions for the sustainable management of primary resources (food, water, soil, energy sources) and for the proper planning of cities, in order to safeguard environmental conditions, citizens' health and economic progress. In this complex landscape the concept of the Digital Twin City (DTC) has emerged: the virtual counterpart of objects and processes in the urban environment, able to communicate with them and to simulate, replicate and predict their possible scenarios. In this context, an essential element is a high-fidelity 3D geometric model of the urban environment, and geomatics provides the most suitable tools for building it. This work is developed in three parts. In the first, a literature review on DTCs was carried out, highlighting their main characteristics such as architecture, enabling technologies, possible modelling approaches, obstacles, future scenarios and examples of real applications, and concluding that an accurate 3D model of the city must be at the core of a DTC. In the second part, the theory of the main geomatic techniques for producing high-fidelity 3D models was illustrated in detail, including photogrammetric aerotriangulation and the Structure from Motion (SfM) algorithm. In the third and final part, an experiment was conducted on three sample areas of the municipality of Bologna: from a dataset of nadir and oblique images acquired in a 2022 photogrammetric flight, reality meshes and orthophotos/DSMs were produced. The products for the third area were compared with the same products obtained from a 2017 dataset. Finally, some tools for measuring and editing the 3D and 2D products were illustrated.
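As a rough illustration of the feature-matching and pose-recovery step at the heart of SfM (not the photogrammetric workflow actually used in the thesis), the sketch below estimates the relative pose between two overlapping frames with OpenCV; the image file names and the camera intrinsics are placeholder assumptions.

```python
# Minimal two-view sketch of the core SfM step: detect features, match them,
# estimate the essential matrix and recover the relative camera pose.
# Not the thesis workflow; file names and intrinsics are placeholder values.
import cv2
import numpy as np

img1 = cv2.imread("frame_0001.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread("frame_0002.jpg", cv2.IMREAD_GRAYSCALE)

K = np.array([[3000.0, 0.0, 2000.0],     # assumed camera intrinsics (pixels)
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching to keep only distinctive correspondences.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Essential matrix with RANSAC, then relative rotation R and translation t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```

A full SfM pipeline repeats this over many image pairs, triangulates tie points and refines everything with bundle adjustment before dense matching produces the meshes and orthophotos/DSMs mentioned above.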
Abstract:
Several medical and dental schools have described their experience of the transition from conventional to digital microscopy in the teaching of general pathology and histology; however, this transition has scarcely been reported in the teaching of oral pathology. Therefore, the objective of the current study is to report the transition from conventional glass slides to virtual microscopy in oral pathology teaching, a unique experience in Latin America. An Aperio ScanScope® scanner was used to digitize the histological slides used in practical lectures of oral pathology. The challenges and benefits observed by the group of professors from the Piracicaba Dental School (Brazil) are described, and a questionnaire to evaluate the students' compliance with this new methodology was applied. An improvement in the classes was reported by the professors, who mainly dealt with questions related to pathological changes instead of technical problems; greater interaction with the students was also reported. The simplicity of the software used and the high quality of the virtual slides, which required less time to identify microscopic structures, were considered important for a better teaching process. Virtual microscopy used to teach oral pathology represents a useful educational methodology, with excellent compliance from the dental students.
Abstract:
Remotely sensed imagery has been widely used for land use/cover classification thanks to periodic data acquisition and the widespread use of digital image processing systems offering a wide range of classification algorithms. The aim of this work was to evaluate some of the most commonly used supervised and unsupervised classification algorithms under different landscape patterns found in Rondônia, including (1) areas of mid-size farms, (2) fish-bone settlements and (3) a gradient of forest and Cerrado (Brazilian savannah). Comparison with a reference map based on the kappa statistic resulted in good to superior indicators (best results: K-means κ = 0.68, 0.77 and 0.64; MaxVer κ = 0.71, 0.89 and 0.70, respectively, for the three areas mentioned). The results show that choosing a specific algorithm requires taking into account both its capacity to discriminate among various spectral signatures under different landscape patterns and a cost/benefit analysis of the different steps carried out by the operator producing a land cover/use map. It is suggested that a more systematic assessment of the several implementation options for a specific project is needed before beginning a land use/cover mapping job.
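For reference (not part of the original abstract), the kappa statistic used for the map comparison is the observed agreement corrected for chance agreement, κ = (p_o − p_e)/(1 − p_e), and can be computed from a confusion matrix between the classified map and the reference map. A minimal sketch with scikit-learn, using made-up class labels, is:

```python
# Minimal sketch: Cohen's kappa between a classified map and a reference map.
# The label arrays below are made-up placeholders, not data from the study.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

reference = np.array(["forest", "forest", "pasture", "cerrado", "pasture", "forest"])
classified = np.array(["forest", "pasture", "pasture", "cerrado", "pasture", "forest"])

print(confusion_matrix(reference, classified))      # class-by-class agreement
kappa = cohen_kappa_score(reference, classified)    # agreement corrected for chance
print(f"kappa = {kappa:.2f}")
```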