869 results for ARTIFACTS
Abstract:
The theoretical foundation of this study comes from the significant recurrence throughout the leadership literature of two distinct behaviors, task orientation and relationship orientation. Task orientation and relationship orientation are assumed to be generic behaviors, which are universally observed and applied in organizations, even though they may be uniquely enacted in organizations across cultures. The lack of empirical evidence supporting these assumptions provided the impetus to hypothetically develop and empirically confirm the universal application of task orientation and relationship orientation and the generalizability of their measurement in a cross-cultural setting. Task orientation and relationship orientation are operationalized through consideration and initiation of structure, two well-established theoretical leadership constructs. Multiple-group mean and covariance structures (MACS) analyses are used to simultaneously validate the generalizability of the two hypothesized constructs across the 12 cultural groups and to assess whether the similarities and differences discovered are measurement and scaling artifacts or reflect true cross-cultural differences. The data were collected by the author and others as part of a larger international research project, and comprise 2341 managers from 12 countries/regions. The results provide compelling evidence that task orientation and relationship orientation, reliably and validly operationalized through consideration and initiation of structure, are generalizable across the countries/regions sampled. However, the results also reveal significant differences in the perception of these behaviors, suggesting that some aspects of task orientation and relationship orientation are strongly affected by cultural influences. These (similarities and) differences reflect directly interpretable, error-free effects among the constructs at the behavioral level.
Thus, task orientation and relationship orientation can demonstrate different relations among cultures, yet still be defined equivalently across the 12 cultures studied. The differences found in this study are true differences and may contain information about the cultural influences characterizing each cultural context (i.e., group). The nature of such influences should be examined before the results can be meaningfully interpreted. To examine the effects of cultural characteristics on the constructs, additional hypotheses on the constructs' latent parameters can be tested across groups. Construct-level tests are illustrated in hypothetical examples in light of the study's results. The study contributes significantly to the theoretical understanding of the nature and generalizability of psychological constructs. The theoretical and practical implications of embedding context into a unified theory of task-oriented and relationship-oriented leader behavior are proposed. Limitations and contributions are also discussed.
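The invariance tests at the heart of a multiple-group MACS analysis reduce to comparing nested models with a chi-square difference test: the fit decrement of a constrained (invariant) model relative to a freely estimated baseline is itself chi-square distributed, with degrees of freedom equal to the number of added constraints. A minimal self-contained sketch of that comparison step (the fit statistics below are hypothetical, not values from this study; the closed-form survival function assumes even degrees of freedom):

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function of a chi-square distribution with even degrees
    of freedom (closed form via the Poisson series)."""
    assert df % 2 == 0 and df > 0
    half = x / 2.0
    term, total = 1.0, 1.0
    for k in range(1, df // 2):
        term *= half / k
        total += term
    return math.exp(-half) * total

def chi_square_difference(chisq_constrained, df_constrained,
                          chisq_free, df_free):
    """Nested-model comparison: delta chi-square, delta df, and the
    p-value for the added invariance constraints."""
    d_chisq = chisq_constrained - chisq_free
    d_df = df_constrained - df_free
    return d_chisq, d_df, chi2_sf_even_df(d_chisq, d_df)

# Hypothetical fit statistics: a configural model (loadings free across
# groups) vs. a metric model (loadings constrained equal).
d, ddf, p = chi_square_difference(512.4, 472, 488.1, 440)
```

A non-significant p here would indicate that the equality constraints do not meaningfully worsen fit, supporting measurement invariance across groups.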
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One new development paradigm that places models as abstractions at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g., Java or C++, then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained, as it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created for the user-centric communication domain using the aforementioned approach.
This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK through swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smartgrid (microgrid energy management), and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
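The decoupling described above resembles a classic strategy/plugin design: a domain-independent engine delegates all domain semantics to a swappable extension. A minimal hypothetical sketch (class and method names invented for illustration; this is not CVM's actual API):

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable domain-specific knowledge (DSK) extension point."""
    @abstractmethod
    def interpret(self, model_change: str) -> str:
        """Map one model change to an executable script fragment."""

class CommunicationDSK(DomainKnowledge):
    def interpret(self, model_change: str) -> str:
        return f"comm-script({model_change})"

class MicrogridDSK(DomainKnowledge):
    def interpret(self, model_change: str) -> str:
        return f"grid-script({model_change})"

class SynthesisEngine:
    """Generic model of execution (GMoE): the traversal and ordering of
    model changes is domain-independent; only their interpretation is
    delegated to the injected DSK."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk

    def synthesize(self, model_changes):
        return [self.dsk.interpret(c) for c in model_changes]

scripts = SynthesisEngine(MicrogridDSK()).synthesize(["addLoad", "shedLoad"])
```

Instantiating a synthesis engine for a new domain then amounts to supplying a new DomainKnowledge subclass, leaving the GMoE untouched.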
Abstract:
The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated following DNA typing processes. Although the chemistries used during the procedures have been enhanced to mitigate the effects of these deleterious compounds, some challenges remain. Inhibitors can be components of the samples themselves, of the substrate where the samples were deposited, or of the chemical(s) associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carryover after sample purification, its correlation to the initial inhibitor input in the sample, and the overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be ineffective for DNA typing, due to large amounts of inhibitory substances and/or environmental degradation, was tested. This included generating data associated with microbial peak signatures to identify locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts.
The extraction methods tested were able to remove >90% of the inhibitors from all samples with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was utilized. Although the results attained suggested that most inhibitors produce minimal effect on downstream applications, analysts should practice caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain an overwhelming amount of inhibitory substances.
Abstract:
Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on quantitation, diagnosis, and clinical management of lung tumors. However, PET images collected at discrete bins can be significantly affected by noise, as there are lower activity counts in each gated bin unless the total PET acquisition time is prolonged; gating methods should therefore be combined with imaging-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in CT images and identifies and localizes lung tumors in PET images; (2) developing and comparing different registration algorithms which process all the information within the entire respiratory cycle and integrate the tumor data from the different gated bins into a single reference bin. Four registration/integration algorithms were compared: Centroid Based, Intensity Based, Rigid Body, and Optical Flow registration, as well as two registration schemes: Direct Scheme and Successive Scheme. Validation was demonstrated by conducting experiments with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged using a GE PET/CT system. Iterations were conducted on simulated tumors of different sizes and at different noise levels. Static tumors without respiratory motion were used as the gold standard; quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level, and computation time. Comparing the tumors before and after correction, the tumor activity values and tumor volumes were closer to those of the static (gold standard) tumors.
Higher correlation values and lower noise were also achieved after applying the correction algorithms. With this method the compromise between short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
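Of the four algorithms compared, centroid-based registration is conceptually the simplest: translate each gated bin so its tumor centroid coincides with that of the reference bin, then combine the aligned bins. A minimal 2-D numpy sketch under simplifying assumptions (integer-pixel shifts, synthetic data; this is not the study's implementation):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a 2-D image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return np.array([(ys * img).sum() / total, (xs * img).sum() / total])

def register_bins(bins, ref_index=0):
    """Centroid-based registration: shift every gated bin so its tumor
    centroid aligns with the reference bin, then average the aligned
    bins so counts accumulate without motion blur."""
    ref_c = centroid(bins[ref_index])
    aligned = []
    for b in bins:
        d = np.rint(ref_c - centroid(b)).astype(int)  # integer shift
        aligned.append(np.roll(b, shift=tuple(d), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```

Averaging N aligned bins reduces relative noise roughly by a factor of sqrt(N), which is the compromise between short per-bin acquisition time and acceptable image noise that the gating-plus-registration approach exploits.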
Abstract:
Seagrass is expected to benefit from increased carbon availability under future ocean acidification. This hypothesis has been little tested by in situ manipulation. To test for ocean acidification effects on seagrass meadows under controlled CO2/pH conditions, we used a Free Ocean Carbon Dioxide Enrichment (FOCE) system, which allows the manipulation of pH as a continuous offset from ambient. It was deployed in a Posidonia oceanica meadow at 11 m depth in the Northwestern Mediterranean Sea. It consisted of two benthic enclosures, an experimental and a control unit (each 1.7 m³), and an additional reference plot in the ambient environment (2 m²) to account for structural artifacts. The meadow was monitored from April to November 2014. The pH of the experimental enclosure was lowered by 0.26 pH units for the second half of the 8-month study. The greatest magnitude of change in P. oceanica leaf biometrics, photosynthesis, and leaf growth accompanied seasonal changes recorded in the environment, and values were similar between the two enclosures. Leaf thickness may change in response to lower pH, but this requires further testing. Results are congruent with other short-term and natural studies that have investigated the response of P. oceanica over a wide range of pH. They suggest any benefit from ocean acidification over the next century (at a pH of 7.7 on the total scale) on Posidonia physiology and growth may be minimal and difficult to detect without increased replication or longer experimental duration. The limited stimulation, which did not surpass any enclosure or seasonal effect, casts doubt on speculations that elevated CO2 would confer resistance to thermal stress and increase the buffering capacity of meadows.
Abstract:
Wireless Sensor and Actuator Networks (WSANs) are a key component in ubiquitous computing systems and have many applications in different knowledge domains. Programming for such networks is very hard and requires developers to know the specificities of the available sensor platforms, increasing the learning curve for developing WSAN applications. In this work, an MDA (Model-Driven Architecture) approach for WSAN application development called ArchWiSeN is proposed. The goal of this approach is to facilitate the development task by providing: (i) a WSAN domain-specific language; (ii) a methodology for WSAN application development; and (iii) an MDA infrastructure composed of several software artifacts (PIM, PSMs, and transformations). ArchWiSeN allows the direct contribution of domain experts in WSAN application development without the need for specialized knowledge of WSAN platforms and, at the same time, allows network experts to manage the application requirements without the need for specific knowledge of the application domain. Furthermore, this approach also aims to enable developers to express and validate functional and non-functional requirements of the application, incorporate services offered by WSAN middleware platforms, and promote reuse of the developed software artifacts. In this sense, this thesis proposes an approach that includes all WSAN development stages for current and emerging scenarios through the proposed MDA infrastructure.
An evaluation of the proposal was performed through: (i) a proof of concept encompassing three different scenarios, performed using the MDA infrastructure to describe the WSAN development process with the application engineering process; (ii) a controlled experiment to assess the use of the proposed approach compared to the traditional method of WSAN application development; (iii) an analysis of ArchWiSeN's support for middleware services, to ensure that WSAN applications using such services can achieve their requirements; and (iv) a systematic analysis of ArchWiSeN in terms of the desired characteristics of an MDA tool, compared with other existing MDA tools for WSAN.
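The PIM-to-PSM transformations at the core of any MDA infrastructure can be illustrated with a toy example. The sketch below is purely hypothetical (platform names, driver identifiers, and model fields are invented and are not ArchWiSeN artifacts); it shows only the general shape of a platform-keyed transformation:

```python
# Toy MDA-style transformation: a platform-independent model (PIM) of a
# periodic sensing task is mapped to platform-specific models (PSMs).
# All names here are invented for illustration.

PIM = {"sense": "temperature", "period_s": 30, "aggregate": "avg"}

TRANSFORMATIONS = {
    "telosb": lambda pim: {"driver": "sht11",
                           "interval_ms": pim["period_s"] * 1000,
                           "reduce": pim["aggregate"]},
    "micaz": lambda pim: {"driver": "mts300",
                          "interval_ms": pim["period_s"] * 1000,
                          "reduce": pim["aggregate"]},
}

def to_psm(pim, platform):
    """Apply the transformation rule registered for the target platform."""
    try:
        return TRANSFORMATIONS[platform](pim)
    except KeyError:
        raise ValueError(f"no transformation for platform {platform!r}")

psm = to_psm(PIM, "telosb")
```

The point of the pattern is the one ArchWiSeN makes: domain experts edit only the PIM, while platform specificities live in the transformation rules.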
Abstract:
The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known result but not as well exemplified. Resolution in the 2D case is revisited using a simple geometric approach based on the angular aperture distribution and the properties of the Radon Transform. Analytically, it is shown that if an interface has dips contained within the angular aperture limits at all points, it is correctly imaged in the tomogram. By inversion of synthetic data this result is confirmed, and it is also evident that isolated artifacts may be present when the dip is near the illumination limit. In the inverse sense, however, if an interface is interpretable from a tomogram, even an approximately horizontal interface, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region it is diffusely imaged in the tomogram, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly can also be an artifact. Jointly, these results state the dilemma of ill-posed inverse problems: the absence of any guarantee of correspondence to the true distribution. The limitations due to illumination may not be solved by the use of mathematical constraints. It is shown that crosswell tomograms derived using sparsity constraints, with both Discrete Cosine Transform and Daubechies bases, basically reproduce the same features seen in tomograms obtained with the classic smoothness constraint. Interpretation must always be done taking into consideration the a priori information and the particular limitations due to illumination. An example of interpreting a real data survey in this context is also presented.
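The geometric argument can be made concrete: for straight rays between two vertical wells, the dips of the source-receiver rays bound the interface dips that can be imaged. A small sketch (idealized straight-ray geometry; the survey parameters are invented):

```python
import math

def ray_dips(source_depths, receiver_depths, well_separation):
    """Dips (degrees from horizontal) of straight rays joining source
    depths in one vertical well to receiver depths in the other."""
    return [math.degrees(math.atan2(zr - zs, well_separation))
            for zs in source_depths for zr in receiver_depths]

def is_dip_illuminated(dip_deg, source_depths, receiver_depths, sep):
    """An interface dip is (roughly) imageable only if it falls inside
    the fan of ray dips, i.e. within the angular aperture limits."""
    dips = ray_dips(source_depths, receiver_depths, sep)
    return min(dips) <= dip_deg <= max(dips)

# Invented survey: stations every 10 m down two wells 100 m apart.
stations = range(0, 201, 10)
ok = is_dip_illuminated(30.0, stations, stations, 100.0)
steep = is_dip_illuminated(80.0, stations, stations, 100.0)
```

With wells 100 m apart and 200 m of depth coverage, the aperture spans roughly ±63° from horizontal, so a 30° dip is illuminated while a near-vertical 80° edge is not, consistent with the unresolvable vertical edges discussed above.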
Abstract:
This research uses scanning electron microscopy-energy dispersive X-ray spectrometry (SEM-EDX) and inductively coupled plasma-mass spectrometry (ICP-MS) on cross-sections of iron artifacts, sectioned from along their shafts, to determine the elemental constituents of a collection of Inuit and European artifacts from along the coast of Labrador. Hand-wrought iron nails from early historic period (16th-18th centuries CE) Inuit sites in Labrador were originally manufactured by and acquired from early whalers and fishers of various European nationalities. The purpose of this research was to assess whether the elements in different samples are sufficiently homogeneous to be viable for a provenience analysis to discern which Inuit nails were originally derived from which European groups: the Basque, English, or French. Consistent relationships between the geochemical signatures of iron nails found at Inuit sites and historic nails derived from specific European groups could provide insights into the prevalence, activity, and nature of indigenous interactions of different European nationalities in the region over time. The results show that the methods applied to evaluate the geochemistry of the nails were not sufficient to detect meaningful patterns, because the nails did not demonstrate the necessary degree of chemical uniformity among different samples from the same artifacts.
Abstract:
Some dads wear nail polish! Is the tooth fairy a boy or a girl? My Grade Two students’ voices were integral in developing each of the action research cycles as students became co-creators of knowledge. I gathered data through a personal journal, observations, reflections, work samples, interviews and classroom artifacts. The research question was focused on creating a safe and caring classroom environment by selecting appropriate instructional strategies based on developing my students’ concept of gender. Findings included students’ acceptance of differences, emulation of gender stereotypes, the significance of role models and student empowerment. Conclusions examined the influence that behaviour has on instructional strategies, creating allies among primary students, the importance of teacher training and the influence that students have in their classroom. Thoughts towards future research include the need for further parent engagement and more exploration of the impact that the school environment has in the classroom.
Abstract:
The water masses in the Florida Straits and Bahamas region are important sources for the North Atlantic surface ocean circulation. In this study, we analyse carbonate preservation in surface sediments located above the chemical lysocline in the Florida Straits and Bahamas region and discuss possible reasons for supralysoclinal dissolution. Calcite dissolution proxies such as the variation of the foraminiferal assemblage, the Fragmentation Index, the Benthic Foraminifera Index, and the Resistance Index indicated good preservation in both areas. The pteropod species Limacina inflata showed very good preservation in sediments of inter-platform channels of the Great Bahama Bank (Providence Channel, Exuma Sound) above the aragonite lysocline. Supralysoclinal aragonite dissolution, however, was observed at two water-depth levels (800-1000 m and below 1500 m) in the Florida Straits. Our observations suggest that the supralysoclinal dissolution in the Florida Straits is due to the degradation of organic material. The presence of Antarctic Intermediate Water (AAIW) may be a contributing factor to the significant aragonite dissolution at 800-1000 m. The comparison of modern preservation patterns in the surface sediments with hydrographic measurements shows that the L. inflata Dissolution Index (LDX) might be an adequate proxy to reconstruct paleo-water mass conditions in an area highly saturated with respect to calcium carbonate.
Abstract:
This study aims to reflect on the cultural and aesthetic formation of teachers and on how they act as co-authors in the virtual environment. The reflection covers topics such as: the Information Age and the challenges facing teachers in the twenty-first century; teacher professional development and multiple personal identities; the importance of a cultural repertoire for teaching practice; and the role of the teacher as co-author in the informational network. In addition to the theoretical reflections, two focus groups were conducted with teachers of different academic backgrounds, working at various levels of education, to discuss the topics presented. The creation of a website was proposed for teachers to navigate and contribute to with posts on many cultural artifacts. To build the content for the site, a "daily creation" practice was developed, that is, the search for and storage of information from multiple sources, such as academic papers, songs, websites, magazines, comics, blogs, books, photographs, and advertisements: in short, everything that can stimulate the imagination and serve for the production of content. Data analysis of the focus groups showed that teachers recognize that a varied cultural repertoire contributes to professional assurance, but that they do not manage to participate actively in the local culture. It was also found that teachers have difficulty handling information and communication technology and therefore distance themselves from an active production network. The focus groups contributed to building the website and its posts on cultural artifacts. The production can be accessed at www.digitaldoprofessor.com.
Abstract:
Inscription: Verso: New York.
Abstract:
Inscription: Verso: women's rights demonstration Bryant Park, New York.
Abstract:
Inscription: Verso: International Women's Day march, New York.