914 results for Dynamic data analysis
Abstract:
This research aimed to understand and analyze the pedagogical practices developed in the teaching and learning of students with Intellectual Disability (ID) enrolled in a regular elementary school class. The study was conducted in a public school in Natal/RN and involved two students with ID, a generalist classroom teacher, a teaching assistant, an arts teacher and the educational coordinator. As a methodological choice, we developed a qualitative case study. The tools used for data construction were semi-structured interviews, participant observation, a field diary and document analysis. Data analysis reveals that the institution in which the research was undertaken has been gradually implementing changes in order to develop an inclusive practice consistent with its stated assumptions. Regarding the practices developed in the teaching and learning of students with intellectual disabilities, certain adjustments were made to objectives, activities and some content, involving the use of varied resources and strategies. The educational activities had different levels of complexity, covering both basic and more complex objectives. From the observations, we noted that the teaching assistant mediated the varied activities, using them as tools to challenge the students' intellectual processes. We also noted a classroom dynamic in which the students with disabilities worked under the guidance of the teaching assistant while the other students worked with the generalist teacher, who used a fairly traditional teaching methodology. This created a situation of isolation, since no practices were proposed to be developed with all students, and interaction among classmates was generally quite restricted. Although advances in the social and academic learning of the students surveyed were highlighted, the teachers said they did not feel prepared to work with inclusion. The study reveals the need for teachers to review some of the actions undertaken, in order to develop more democratic pedagogical practices that stimulate interactions between students by proposing challenging activities that promote concept formation. It also points to the need for the education system to invest in and encourage the qualification of teachers with regard to education from an inclusive perspective, through actions that promote lifelong learning. Teachers also need to develop a reflective attitude, so that careful reflection becomes part of the practice inherent in teaching and is used to enhance their educational work.
Abstract:
Social media classification problems have drawn more and more attention in the past few years. With the rapid development of the Internet and the popularity of computers, there is an astronomical amount of information on social media platforms. The datasets are generally large scale and are often corrupted by noise. The presence of noise in the training set has a strong impact on the performance of supervised learning (classification) techniques. A budget-driven one-class SVM approach that is suitable for large-scale social media data classification is presented in this thesis. Our approach is based on an existing online one-class SVM learning algorithm, referred to as the STOCS (Self-Tuning One-Class SVM) algorithm. To justify our choice, we first analyze the noise resilience of STOCS using synthetic data. The experiments suggest that STOCS is more robust against label noise than several other existing approaches. Next, to handle big-data classification problems for social media data, we introduce several budget-driven features, which allow the algorithm to be trained within limited time and memory requirements. Moreover, the resulting algorithm can easily adapt to changes in dynamic data with minimal computational cost. Compared with two state-of-the-art approaches, LIBLINEAR and kNN, our approach is shown to be competitive while requiring less memory and time.
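The noise-resilience experiment described above can be illustrated with a minimal sketch: train a one-class SVM on synthetic data contaminated with label noise and check how much of the clean class the learned boundary still accepts. This uses scikit-learn's generic OneClassSVM as a stand-in for STOCS, whose implementation is not given here; all parameter values are illustrative assumptions.

```python
# Minimal sketch (not the thesis's STOCS implementation): train a one-class SVM
# on synthetic data whose training set is contaminated with label noise, then
# check how well the learned boundary still recovers the clean class.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# "Clean" class: a 2-D Gaussian blob; "noise": uniform outliers mislabeled as positives.
clean = rng.normal(loc=0.0, scale=1.0, size=(900, 2))
noise = rng.uniform(low=-6.0, high=6.0, size=(100, 2))   # 10% label noise
train = np.vstack([clean, noise])

# nu bounds the fraction of training points treated as outliers (assumed setting).
model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(train)

# Evaluate on held-out clean samples: how many are still accepted as in-class?
test_clean = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
acceptance = (model.predict(test_clean) == 1).mean()
print(f"fraction of clean test points accepted: {acceptance:.2f}")
```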
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft-tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment. The study is therefore divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and improvements to DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple-PK-model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective, IRB-approved study of brain radiosurgery patients' DCE-MRI scans, the clinically obtained image data were selected as reference data, and simulated accelerated k-space acquisitions were generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of the PK maps generated from the undersampled data with reference to the PK maps generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
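As a rough illustration of the two undersampling strategies described (not the study's actual sampling grids), the sketch below builds a radial multi-ray mask and a Cartesian random mask with a fully sampled central band; the matrix size, ray count and sampling fraction are assumptions chosen to approximate a nominal acceleration factor of four.

```python
# Illustrative k-space undersampling masks (not the study's exact grids):
# 1) a radial multi-ray mask, 2) a Cartesian random mask with a fully
# sampled low-frequency band, both at a nominal acceleration factor of ~4.
import numpy as np

def radial_mask(n, num_rays=64):
    """Binary mask selecting `num_rays` diametral lines through k-space center."""
    mask = np.zeros((n, n), dtype=bool)
    center = (n - 1) / 2.0
    radius = np.arange(-n // 2, n // 2)
    for angle in np.linspace(0.0, np.pi, num_rays, endpoint=False):
        rows = np.clip(np.round(center + radius * np.sin(angle)).astype(int), 0, n - 1)
        cols = np.clip(np.round(center + radius * np.cos(angle)).astype(int), 0, n - 1)
        mask[rows, cols] = True
    return mask

def cartesian_random_mask(n, keep_fraction=0.25, center_lines=16, seed=0):
    """Randomly keep phase-encode lines, always keeping a central band."""
    rng = np.random.default_rng(seed)
    keep = rng.random(n) < keep_fraction
    lo = n // 2 - center_lines // 2
    keep[lo:lo + center_lines] = True
    mask = np.zeros((n, n), dtype=bool)
    mask[keep, :] = True
    return mask

for m in (radial_mask(256), cartesian_random_mask(256)):
    print("sampled fraction:", m.mean().round(3))
```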
Second, for high-temporal-resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is conventionally written as an integral expression. The method also applies a Kolmogorov-Zurbenko (KZ) filter to suppress noise in the data and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as reference values to simulate DCE-MRI data at different temporal resolutions and noise levels. Results showed that at both high temporal resolution (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolution, its calculation efficiency was superior to current methods by a factor of about 10². In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high-temporal-resolution DCE-MRI.
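For orientation, the standard Tofts model expresses the tissue concentration as C_t(t) = K^trans ∫ C_p(τ) e^{-k_ep(t-τ)} dτ, and a well-known rearrangement turns the fit into a linear least-squares problem in cumulative integrals of C_p and C_t. The sketch below uses that generic linear approach on simulated data; it is not the dissertation's derivative-based, KZ-filtered method, and all parameter values are illustrative.

```python
# Minimal sketch of Tofts PK model fitting via a linear least-squares
# rearrangement (generic textbook approach, not the dissertation's
# derivative-based, KZ-filtered method). All parameter values are illustrative.
import numpy as np

dt = 1.0                                   # temporal resolution in seconds (assumed)
t = np.arange(0, 300, dt)

# Simple population-like arterial input function (illustrative biexponential).
Cp = 5.0 * (np.exp(-t / 30.0) - np.exp(-t / 5.0))
Cp[t < 5] = 0.0

# Forward Tofts model: Ct(t) = Ktrans * integral of Cp(tau) exp(-kep (t - tau)) dtau
Ktrans_true, kep_true = 0.0025, 0.01       # per second (illustrative)
kernel = np.exp(-kep_true * t)
Ct = Ktrans_true * np.convolve(Cp, kernel)[: len(t)] * dt
Ct += np.random.default_rng(1).normal(0, 0.002, len(t))   # measurement noise

# Linear rearrangement: Ct = Ktrans * cumint(Cp) - kep * cumint(Ct)
A = np.column_stack([np.cumsum(Cp) * dt, -np.cumsum(Ct) * dt])
Ktrans_fit, kep_fit = np.linalg.lstsq(A, Ct, rcond=None)[0]
print(f"Ktrans: true {Ktrans_true:.4f}, fit {Ktrans_fit:.4f}")
print(f"kep:    true {kep_true:.4f}, fit {kep_fit:.4f}")
```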
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodological development along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity. This approach is motivated by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple treatment fractions, with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start, the investigated Rényi dimensions of the classic PK rate constant map showed significant differences between the treatment and control groups; when Rényi dimensions were used for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It is intended to address the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM performed better overall than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity on DCE images during contrast agent uptake. In the same small-animal experiment, selected parameters from dynamic FSD analysis showed significant differences between treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. Treatment/control classification after the first treatment fraction was also improved when using dynamic FSD parameters rather than conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
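As a generic illustration of the fractal-dimension family of measures mentioned above (not the dissertation's Rényi-dimension or FSD implementations), the sketch below estimates a box-counting dimension on a thresholded synthetic parameter map; the map, box sizes and threshold are assumptions.

```python
# Minimal sketch of a box-counting fractal dimension estimate on a binary
# (thresholded) parameter map. Generic illustration only; the dissertation's
# Renyi-dimension and Fractal Signature Dissimilarity analyses differ in detail.
import numpy as np

def box_count_dimension(binary_map, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension as the slope of log N(s) vs log(1/s)."""
    n = binary_map.shape[0]
    counts = []
    for s in box_sizes:
        # Count boxes of side s that contain at least one "on" pixel.
        trimmed = binary_map[: n - n % s, : n - n % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Illustrative "parameter map": smooth blob plus speckle, thresholded at its median.
rng = np.random.default_rng(2)
y, x = np.mgrid[0:128, 0:128]
pk_map = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 800.0) + 0.3 * rng.random((128, 128))
print("estimated box-counting dimension:", round(box_count_dimension(pk_map > np.median(pk_map)), 2))
```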
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean values of the PK parameters. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model was shown to be superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, a novel biomarker was designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
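The idea of a histogram-defined "biological subvolume" can be sketched generically: threshold a modeled water-exchange-rate map and compare a PK rate constant inside versus outside the resulting region. The percentile cut used below is an illustrative assumption, not the dissertation's histogram-analysis rule, and the maps are synthetic.

```python
# Minimal sketch: define a "biological subvolume" from the histogram of a modeled
# water-exchange-rate map and compare a PK rate constant inside vs. outside it.
# The percentile-based cut is an illustrative assumption, not the study's rule.
import numpy as np

rng = np.random.default_rng(3)
exchange_rate = rng.gamma(shape=2.0, scale=1.5, size=(64, 64))   # synthetic map
kep_map = 0.5 + 0.1 * exchange_rate + rng.normal(0, 0.05, (64, 64))

threshold = np.percentile(exchange_rate, 75)      # keep the upper quartile (assumed)
subvolume = exchange_rate >= threshold

print("subvolume fraction:", round(subvolume.mean(), 2))
print("mean kep inside subvolume :", round(kep_map[subvolume].mean(), 3))
print("mean kep outside subvolume:", round(kep_map[~subvolume].mean(), 3))
```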
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method for accelerating DCE-MRI acquisition to achieve better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high-temporal-resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Thermodynamic stability measurements on proteins and protein-ligand complexes can offer insights not only into the fundamental properties of protein folding reactions and protein functions, but also into the development of protein-directed therapeutic agents to combat disease. Conventional calorimetric or spectroscopic approaches for measuring protein stability typically require large amounts of purified protein. This requirement has precluded their use in proteomic applications. Stability of Proteins from Rates of Oxidation (SPROX) is a recently developed mass spectrometry-based approach for proteome-wide thermodynamic stability analysis. Since the proteomic coverage of SPROX is fundamentally limited by the detection of methionine-containing peptides, the use of tryptophan-containing peptides was investigated in this dissertation. A new SPROX-like protocol was developed that measured protein folding free energies using the denaturant dependence of the rate at which globally protected tryptophan and methionine residues are modified with dimethyl (2-hydroxy-5-nitrobenzyl) sulfonium bromide and hydrogen peroxide, respectively. This so-called Hybrid protocol was applied to proteins in yeast and MCF-7 cell lysates and achieved a ~50% increase in proteomic coverage compared to probing only methionine-containing peptides. Subsequently, the Hybrid protocol was successfully utilized to identify and quantify both known and novel protein-ligand interactions in cell lysates. The ligands under study included the well-known Hsp90 inhibitor geldanamycin and the less well-understood omeprazole sulfide that inhibits liver-stage malaria. In addition to protein-small molecule interactions, protein-protein interactions involving Puf6 were investigated using the SPROX technique in comparative thermodynamic analyses performed on wild-type and Puf6-deletion yeast strains. A total of 39 proteins were detected as Puf6 targets, and 36 of these targets were previously unknown to interact with Puf6. Finally, to facilitate the SPROX/Hybrid data analysis process and minimize human error, a Bayesian algorithm was developed for transition midpoint assignment. In summary, the work in this dissertation expanded the scope of SPROX and evaluated the use of SPROX/Hybrid protocols for characterizing protein-ligand interactions in complex biological mixtures.
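The transition midpoint assignment mentioned above can be illustrated with a minimal, generic Bayesian sketch: a grid posterior over the midpoint of a sigmoidal denaturation curve under a Gaussian noise model. The likelihood, priors, noise level and curve shape are illustrative assumptions, not the dissertation's algorithm.

```python
# Minimal, generic sketch of Bayesian transition-midpoint assignment: a grid
# posterior over the midpoint C1/2 of a sigmoidal denaturation curve under a
# Gaussian noise model. Priors, noise level, and curve shape are illustrative
# assumptions, not the dissertation's algorithm.
import numpy as np

def sigmoid(c, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (c - midpoint)))

# Synthetic SPROX-like data: fraction modified vs. denaturant concentration (M).
rng = np.random.default_rng(4)
denaturant = np.linspace(0.0, 3.0, 12)
true_midpoint, true_slope, sigma = 1.4, 4.0, 0.05
observed = sigmoid(denaturant, true_midpoint, true_slope) + rng.normal(0, sigma, denaturant.size)

# Discrete grid posterior over the midpoint (slope fixed for simplicity; flat prior).
midpoint_grid = np.linspace(0.2, 2.8, 500)
log_like = np.array([
    -0.5 * np.sum((observed - sigmoid(denaturant, m, true_slope)) ** 2) / sigma ** 2
    for m in midpoint_grid
])
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

post_mean = float(np.sum(midpoint_grid * posterior))
print(f"true midpoint: {true_midpoint:.2f} M, posterior mean estimate: {post_mean:.2f} M")
```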
Abstract:
Energy efficiency and user comfort have recently become priorities in the Facility Management (FM) sector. This has resulted in the use of innovative building components, such as thermal solar panels, heat pumps, etc., as they have the potential to provide better performance, energy savings and increased user comfort. However, as the complexity of components increases, the requirement for maintenance management also increases. The standard routine for building maintenance is inspection, which results in repairs or replacement when a fault is found. This routine leads to unnecessary inspections, which carry a cost in terms of component downtime and work hours. This research proposes an alternative routine: performing building maintenance at the point in time when a component is degrading and requires maintenance, thus reducing the frequency of unnecessary inspections. This thesis demonstrates that statistical techniques can be used as part of a maintenance management methodology to invoke maintenance before failure occurs. The proposed FM process is presented through a scenario utilising current Building Information Modelling (BIM) technology and innovative contractual and organisational models. This FM scenario supports a Degradation based Maintenance (DbM) scheduling methodology, implemented using two statistical techniques, Particle Filters (PFs) and Gaussian Processes (GPs). DbM consists of extracting and tracking a degradation metric for a component. Limits for the degradation metric are identified using one of a number of proposed processes, which determine the limits according to the maturity of the historical information available. DbM is implemented for three case study components: a heat exchanger; a heat pump; and a set of bearings. The degradation points identified for each case study by a PF, a GP and a hybrid (PF and GP combined) DbM implementation are assessed against known degradation points. The GP implementations are successful for all components. For the PF implementations, the results presented in this thesis find that the extracted metrics and limits identify degradation occurrences accurately for components which are in continuous operation; for components which have seasonal operational periods, the PF may wrongly identify degradation. The GP performs more robustly than the PF, but the PF, on average, results in fewer false positives. The hybrid implementations, which combine GP and PF results, are successful for two of the three case studies and are not affected by seasonal data. Overall, DbM is effectively applied for the three case study components. The accuracy of the implementations is dependent on the relationships modelled by the PF and GP, and on the type and quantity of data available. This novel maintenance process can improve equipment performance and reduce energy wastage from the operation of BSCs.
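The GP side of a DbM implementation can be sketched as fitting a Gaussian Process to a degradation metric and flagging when the prediction crosses an assumed limit. The kernel, limit and synthetic data below are illustrative assumptions, not the thesis's case-study models.

```python
# Minimal sketch of Degradation-based Maintenance (DbM) with a Gaussian Process:
# fit a GP to a synthetic degradation metric and flag when the predicted mean
# (plus an uncertainty margin) crosses an assumed limit. Illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
days = np.arange(0, 200, 2.0)
# Synthetic degradation metric: slow drift plus measurement noise.
metric = 0.002 * days + 0.0005 * days ** 1.2 + rng.normal(0, 0.03, days.size)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=30.0) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(days.reshape(-1, 1), metric)

future = np.arange(0, 400, 2.0).reshape(-1, 1)
mean, std = gp.predict(future, return_std=True)

degradation_limit = 0.6                      # assumed limit on the metric
crossing = future[mean + 2 * std >= degradation_limit]
if crossing.size:
    print(f"schedule maintenance around day {crossing[0, 0]:.0f}")
else:
    print("no predicted limit crossing within the horizon")
```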
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in various applications, including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As these successful applications show, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information-theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation, with respect to the future measurements, of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information-theoretic functions are bounded. Based on this theorem, efficient estimators of the new information-theoretic functions are designed; they are proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information-theoretic functions under sensor dynamics constraints is studied, and the optimization is proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
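To make the central quantity concrete, the sketch below computes the closed-form KL divergence between a GP's posterior and prior predictive distributions at a set of evaluation points for a toy 1-D field. The kernel, data and evaluation points are illustrative assumptions; the dissertation's expected-KL function (an expectation over future measurements) is not reproduced here.

```python
# Minimal sketch: KL divergence between a GP's posterior and prior predictive
# distributions at a set of evaluation points, using the closed form for
# multivariate Gaussians. Kernel, data, and points are illustrative assumptions.
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = mu0.size
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff - k + logdet1 - logdet0)

rng = np.random.default_rng(6)
x_train = np.linspace(0, 5, 8)                       # measurement locations
y_train = np.sin(x_train) + rng.normal(0, 0.1, 8)    # noisy measurements
x_eval = np.linspace(0, 5, 20)                       # points where the field is evaluated
noise = 0.1 ** 2

K_tt = rbf_kernel(x_train, x_train) + noise * np.eye(8)
K_te = rbf_kernel(x_train, x_eval)
K_ee = rbf_kernel(x_eval, x_eval)

# Prior predictive: zero mean, covariance K_ee. Posterior predictive after conditioning.
solve = np.linalg.solve(K_tt, K_te)
post_mean = solve.T @ y_train
post_cov = K_ee - K_te.T @ solve + 1e-8 * np.eye(20)   # jitter for numerical stability
prior_cov = K_ee + 1e-8 * np.eye(20)

print("KL(posterior || prior):", round(gaussian_kl(post_mean, post_cov, np.zeros(20), prior_cov), 3))
```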
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed; its efficiency is demonstrated in a numerical experiment with ocean current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information-theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information-theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
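For the discrete-control-space case, the greedy strategy can be sketched as follows: at each step, pick the candidate measurement location that most reduces the total GP predictive variance over the region of interest. This is a generic stand-in for the dissertation's information-value functions; the kernel, grid and measurement budget below are assumptions.

```python
# Minimal sketch of greedy sensor planning over a discrete control space:
# repeatedly pick the candidate measurement location that most reduces total
# GP predictive variance. A stand-in for the dissertation's information-value
# functions; kernel, grid, and budget are illustrative assumptions.
import numpy as np

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def total_posterior_variance(measured, eval_pts, noise=1e-2):
    """Sum of GP posterior variances at eval_pts given measurement locations."""
    if len(measured) == 0:
        return float(np.trace(rbf(eval_pts, eval_pts)))
    m = np.asarray(measured)
    K_mm = rbf(m, m) + noise * np.eye(len(m))
    K_me = rbf(m, eval_pts)
    post_cov = rbf(eval_pts, eval_pts) - K_me.T @ np.linalg.solve(K_mm, K_me)
    return float(np.trace(post_cov))

eval_pts = np.linspace(0, 10, 50)          # where the spatial field is of interest
candidates = list(np.linspace(0, 10, 21))  # discrete sensor control space
chosen, budget = [], 5

for _ in range(budget):
    best = min(candidates, key=lambda c: total_posterior_variance(chosen + [c], eval_pts))
    chosen.append(best)
    candidates.remove(best)

print("greedy measurement locations:", [round(c, 1) for c in chosen])
```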
Abstract:
Advanced Placement (AP) is a series of courses and tests designed to determine mastery of introductory college material, and it has become part of the American educational system. The changing conception of AP was examined using critical theory to determine what led to a view of continual success. The study utilized David Armstrong’s variation of Michel Foucault’s critical theory to construct an analytical framework, with Black and Ubbes’ data gathering techniques and Braun and Clark’s data analysis applied within it. Data included 1135 documents: 641 journal articles, 421 newspaper articles and 82 government documents. The study revealed three historical ruptures, each correlated with a theme containing subthemes. The first rupture was the Sputnik launch in 1958. Its correlated theme was AP leading to school reform, with subthemes of AP as reform for able students and AP’s gaining acceptance from secondary schools and higher education. The second rupture was the Nation at Risk report published in 1983. Its correlated theme was AP’s shift in emphasis from the exam to the course, with the subthemes of AP as a course, a shift in AP’s target population, the use of AP courses to promote equity, and AP courses modifying curricula. The passage of the No Child Left Behind Act of 2001 was the third rupture. Its correlated theme was AP as a means to narrow the achievement gap, with the subthemes of AP as a college preparatory program and the shift of AP to an open-access program. The themes revealed a perception that progressively integrated the program into American education. The AP program changed its emphasis from tests to curriculum, and is seen as the nation’s premier academic program to promote reform and prepare students for college. It has become a major source of income for the College Board. In effect, AP has become an agent of privatization, spurring other private entities into competition for government funding. The change and growth of the program over the past 57 years resulted in deep integration into American education. As such, the program remains an intrinsic part of the system and continues to evolve within American education.
Abstract:
The need for elemental analysis techniques to solve forensic problems continues to expand as the samples collected from crime scenes grow in complexity. Laser ablation ICP-MS (LA-ICP-MS) has been shown to provide a high degree of discrimination between samples that originate from different sources. In the first part of this research, two laser ablation ICP-MS systems were compared, one using a nanosecond laser and the other a femtosecond laser source, for the forensic analysis of glass. The results showed that femtosecond LA-ICP-MS did not provide significant improvements in terms of accuracy, precision and discrimination, although it did provide lower detection limits. In addition, it was determined that, even for femtosecond LA-ICP-MS, an internal standard should be used to obtain accurate analytical results for glass analyses. In the second part, a method using laser-induced breakdown spectroscopy (LIBS) for the forensic analysis of glass was shown to provide excellent discrimination for a glass set consisting of 41 automotive fragments. The discrimination power was compared to that of two leading elemental analysis techniques, µXRF and LA-ICP-MS, and the results were similar: all methods achieved >99% discrimination, and the pairs found indistinguishable were similar. An extensive data analysis approach for LIBS glass analyses was developed to minimize Type I and Type II errors, leading to a recommendation of 10 ratios to be used for glass comparisons. Finally, a LA-ICP-MS method for the qualitative analysis and discrimination of gel ink sources was developed and tested on a set of ink samples. In the first discrimination study, qualitative analysis achieved 95.6% discrimination in a blind study consisting of 45 black gel ink samples provided by the United States Secret Service; a 0.4% false exclusion (Type I) error rate and a 3.9% false inclusion (Type II) error rate were obtained. In the second discrimination study, 99% discrimination power was achieved for a black gel ink pen set consisting of 24 self-collected samples. The two pairs found to be indistinguishable came from the same source of origin (the same manufacturer and type of pen purchased in different locations). It was also found that gel ink from the same pen, regardless of age, was indistinguishable, as were gel ink pens (four pens) originating from the same pack.
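One common way to frame the pairwise comparisons behind such discrimination figures is a match-interval test on element ratios, sketched below. The ±3s criterion and the example ratios are illustrative assumptions, not necessarily the criterion or the 10 recommended ratios from this work.

```python
# Minimal sketch of a pairwise match-interval comparison on element ratios,
# a common approach in forensic glass comparisons (illustrative only; not
# necessarily the criterion or the 10 recommended ratios from this work).
import numpy as np

def indistinguishable(reps_a, reps_b, k=3.0):
    """Compare two fragments using mean +/- k*std match intervals per ratio."""
    a, b = np.asarray(reps_a, float), np.asarray(reps_b, float)
    mean_a, sd_a = a.mean(axis=0), a.std(axis=0, ddof=1)
    mean_b, sd_b = b.mean(axis=0), b.std(axis=0, ddof=1)
    lo_a, hi_a = mean_a - k * sd_a, mean_a + k * sd_a
    lo_b, hi_b = mean_b - k * sd_b, mean_b + k * sd_b
    overlap = (lo_a <= hi_b) & (lo_b <= hi_a)       # intervals overlap per ratio
    return bool(overlap.all())                      # exclude if any ratio differs

# Replicate measurements of (hypothetical) element ratios, e.g. Sr/Zr and Ba/Ca.
fragment_1 = [[1.02, 0.31], [0.98, 0.29], [1.01, 0.30]]
fragment_2 = [[1.35, 0.42], [1.33, 0.41], [1.37, 0.44]]
print("indistinguishable?", indistinguishable(fragment_1, fragment_2))
```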
Abstract:
The methodological reflections on focus groups (FGs) in this article take as their starting point a research project with middle-class sectors of the Buenos Aires Metropolitan Area. The reference study addresses discourses and practices of health care in a contemporary scenario characterized by the diversification of specialists, growing media coverage of recommendations on healthy living and well-being, the implementation of public health-promotion policies, and the growth of the industry of products and services related to this topic. The aim of the article is to reflect, on the basis of our research experience, on two aspects that have received particular attention in the most recent methodological literature: the criteria for composing the groups and their consequences for the dynamics of group conversations, and the strategies for accounting for group interaction in data analysis. On this last point, we explore the potential of FGs for observing the identity work involved in health care. We situate our study and the methodological decisions taken within current debates on the variety of uses of FGs.
Abstract:
This work examines independence in the Canadian justice system using an approach adapted from new legal realist scholarship called ‘dynamic realism’. This approach proposes that issues in law must be considered in relation to their recursive and simultaneous development with historic, social and political events. Such events describe ‘law in action’ and more holistically demonstrate principles like independence, rule of law and access to justice. My dynamic realist analysis of independence in the justice system employs a range of methodological tools and approaches from the social sciences, including historical and historiographical study; public administration, policy and institutional analysis; an empirical component; and constitutional, statutory-interpretation and jurisprudential analysis. In my view, principles like independence represent aspirational ideals in law which can be better understood by examining how they manifest in legal culture and in the legal system. This examination focuses on the principle and practice of independence for both lawyers and judges in the justice system, but highlights the independence of the Bar. It considers the inter-relation between lawyer independence and the ongoing refinement of judicial independence in Canadian law. It also considers the independence of both the Bar and the Judiciary in the context of the administration of justice, and practically illustrates the interaction between these principles through a case study of a specific aspect of the court system. This work also focuses on recent developments in the principle of Bar independence and its relation to an emerging school of professionalism scholarship in Canada. The work concludes by describing the principle of independence as both conditional and dynamic, but rooted in a unitary concept for both lawyers and judges. In short, independence can be defined as the impartiality, neutrality and autonomy of legal decision-makers in the justice system to apply, protect and improve the law for what has become its primary normative purpose: facilitating access to justice. While independence of both the Bar and the Judiciary is required to support access to independent courts, some recent developments suggest that the practical interactions between independence and access need to be the subject of further research, to better account for both the principles and the practicalities of the Canadian justice system.
Abstract:
Chemical stratigraphy, or the study of the variation of chemical elements within sedimentary sequences, has gradually become an established tool in the research and correlation of global geologic events. In this paper, 87Sr/86Sr ratios of the Triassic marine carbonates (Muschelkalk facies) of the southeast Iberian Ranges, Iberian Peninsula, are presented and a representative Sr-isotope curve is constructed for the upper Ladinian interval. The studied stratigraphic succession is 102 meters thick, continuous, and well preserved. Previous paleontological data from macro- and microfossils (ammonites, bivalves, foraminifera, conodonts) and palynological assemblages suggest a Fassanian-Longobardian age (Late Ladinian). Although diagenetic minerals are present in small amounts, the elemental data of bulk carbonate samples, especially the Sr contents, show a major variation that probably reflects palaeoenvironmental changes. The 87Sr/86Sr curve rises from 0.707649 near the base of the section to 0.707741, then declines rapidly to 0.707624, with a final rise to 0.70787 in the upper part. The data up to meter 80 of the studied succession are broadly consistent with 87Sr/86Sr ratios of sequences of similar age and complement those data. Moreover, the sequence stratigraphic framework and its key surfaces, which are difficult to recognise from facies analysis alone, are characterised by combining variations in the Ca, Mg, Mn, Sr and CaCO3 contents.
Abstract:
The study of the Upper Jurassic-Lower Cretaceous deposits (Higueruelas, Villar del Arzobispo and Aldea de Cortés Formations) of the South Iberian Basin (NW Valencia, Spain) reveals new stratigraphic and sedimentological data, which have significant implications for the stratigraphic framework, depositional environments and age of these units. The Higueruelas Fm was deposited on a mid-inner carbonate platform where oncolitic bars migrated under the action of storms and where oncoid production progressively decreased towards the uppermost part of the unit. The overlying Villar del Arzobispo Fm has traditionally been interpreted as an inner platform-lagoon evolving into a tidal flat. Here it is interpreted as an inner carbonate platform affected by storms, where oolitic shoals protected a lagoon that received siliciclastic input from the continent. The Aldea de Cortés Fm has previously been interpreted as a lagoon surrounded by tidal flats and fluvial-deltaic plains. Here it is reinterpreted as a coastal wetland where muddy siliciclastic deposits interacted with shallow fresh to marine water bodies, aeolian dunes and continental siliciclastic inputs. The contact between the Higueruelas and Villar del Arzobispo Fms, classically defined as gradual, is also interpreted here as rapid. More importantly, the contact between the Villar del Arzobispo and Aldea de Cortés Fms, previously considered unconformable, is here interpreted as gradual. The presence of Alveosepta in the Villar del Arzobispo Fm suggests that at least part of this unit is Kimmeridgian, unlike the previously assigned Late Tithonian-Middle Berriasian age. Consequently, the underlying Higueruelas Fm, previously considered Tithonian, should not be younger than Kimmeridgian. Accordingly, sedimentation of the Aldea de Cortés Fm, previously considered Valanginian-Hauterivian, probably started during the Tithonian, and it may be considered part of the regressive trend of the Late Jurassic-Early Cretaceous cycle. This is consistent with the dinosaur faunas, typically Jurassic, described in the Villar del Arzobispo and Aldea de Cortés Fms.
Abstract:
Nano-scale touch-screen thin films have not been thoroughly investigated in terms of dynamic impact analysis under various strain rates. This research focuses on two different thin films, a Zinc Oxide (ZnO) film and an Indium Tin Oxide (ITO) film, deposited on Polyethylene Terephthalate (PET) substrates for standard touch-screen panels. Dynamic Mechanical Analysis (DMA) was performed on the ZnO-coated PET substrates, and nano-impact (fatigue) testing was performed on the ITO-coated PET substrates. Other analyses include hardness and elastic modulus measurements, atomic force microscopy (AFM), Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM) of the film surface.
Tan delta in DMA is defined as the ratio of the loss modulus (viscous response) to the storage modulus (elastic response) of the material, and its peak over the test sweep identifies the glass transition temperature (Tg). In essence, Tg marks the change from the glassy to the rubbery state of the material; for our ZnO film sample, Tg was found to be 388.3 K. The DMA results also showed that the tan delta curve increases monotonically in the viscoelastic state (before Tg) and decreases sharply in the rubbery state (after Tg) until recrystallization of the ZnO takes place. This led to the interpretation that enhanced ductility can be achieved at the expense of the strength of the material.
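The tan δ calculation itself is straightforward; the sketch below computes tan δ = E″/E′ from synthetic storage and loss modulus curves and reads Tg off the tan δ peak. The curves and values are illustrative assumptions, not the measured ZnO/PET data.

```python
# Minimal sketch of the tan(delta) calculation used in DMA: tan(delta) = E''/E',
# with Tg taken at the tan(delta) peak. The modulus curves below are synthetic
# illustrations, not the measured ZnO/PET data.
import numpy as np

temperature = np.linspace(300.0, 450.0, 301)          # K
# Synthetic storage modulus dropping through a glass transition region,
# and a loss modulus peaking in that same region.
storage = 3.0e9 / (1.0 + np.exp((temperature - 388.0) / 6.0)) + 5.0e7
loss = 4.0e8 * np.exp(-((temperature - 380.0) ** 2) / (2 * 12.0 ** 2)) + 1.0e7

tan_delta = loss / storage
tg = temperature[np.argmax(tan_delta)]
print(f"estimated Tg from tan(delta) peak: {tg:.1f} K")
```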
For the nano-impact testing of the ITO-coated PET, damage started with crack initiation and propagation. The interpretation of the nano-impact results depended on the characteristics of the loading history. Under nano-impact loading, the surface of the ITO film suffered several forms of failure, ranging from deformation to catastrophic failure. It is concluded that, in this type of application, the films should have low residual stress to prevent deformation, good adhesion strength, durability and good resistance to wear.
Abstract:
Background: There is increasing interest in how culture may affect the quality of healthcare services, and previous research has shown that ‘treatment culture’—of which there are three categories (resident-centred, ambiguous and traditional)—in a nursing home may influence prescribing of psychoactive medications. Objective: The objective of this study was to explore and understand treatment culture in prescribing of psychoactive medications for older people with dementia in nursing homes. Method: Six nursing homes—two from each treatment culture category—participated in this study. Qualitative data were collected through semi-structured interviews with nursing home staff and general practitioners (GPs), which sought to determine participants’ views on prescribing and administration of psychoactive medication, and their understanding of treatment culture and its potential influence on prescribing of psychoactive drugs. Following verbatim transcription, the data were analysed and themes were identified, facilitated by NVivo and discussion within the research team. Results: Interviews took place with five managers, seven nurses, 13 care assistants and two GPs. Four themes emerged: the characteristics of the setting, the characteristics of the individual, relationships and decision making. The characteristics of the setting were exemplified by views of the setting, daily routines and staff training. The characteristics of the individual were demonstrated by views on the personhood of residents and staff attitudes. Relationships varied between staff within and outside the home. These relationships appeared to influence decision making about prescribing of medications. The data analysis found that each home exhibited traits that were indicative of its respective assigned treatment culture. Conclusion: Nursing home treatment culture appeared to be influenced by four main themes. Modification of these factors may lead to a shift in culture towards a more flexible, resident-centred culture and a reduction in prescribing and use of psychoactive medication.
Abstract:
Tide gauge data are identified as legacy data, given the radical transition between observation method and required output format associated with tide gauges over the 20th century. Observed water level variation through tide-gauge records is regarded as the only significant basis for determining recent historical variation (decade to century) in mean sea level and storm surge. Few tide gauge records cover the 20th century, so the Belfast (UK) Harbour tide gauge would be a strategic long-term (110-year) record if the full paper-based records (marigrams) were digitally restructured to allow consistent data analysis. This paper presents the methodology for extracting a consistent time series of observed water levels from the 5 different Belfast Harbour tide-gauge positions/machine types, starting in late 1901. Tide-gauge data were digitally retrieved from the original analogue (daily) records by scanning the marigrams and then extracting the sequential tidal elevations with graph-line-seeking software (Ungraph™). This automation of signal extraction allowed the full Belfast series to be retrieved quickly relative to any manual x–y digitisation of the signal. Restructuring of the variable-length tidal data sets into consistent daily, monthly and annual file formats was undertaken with project-developed software: Merge&Convert and MergeHYD allow consistent water-level sampling at both 60 min (the past standard) and 10 min intervals, the latter enhancing surge measurement. The Belfast tide-gauge data have been rectified, validated and quality controlled (IOC 2006 standards). The result is a consistent annual-based legacy data series for Belfast Harbour that includes over 2 million tidal-level observations.
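The kind of restructuring described (consistent 60 min and 10 min series plus per-year files) can be sketched with pandas on a hypothetical CSV of digitised water levels. The file name, column names and output layout below are assumptions for illustration; the Merge&Convert and MergeHYD formats are not reproduced.

```python
# Minimal sketch of restructuring digitised tide-gauge levels into consistent
# 10 min and 60 min series and annual files. The CSV name, column names, and
# output layout are hypothetical; the project's Merge&Convert / MergeHYD
# formats are not reproduced here.
import pandas as pd

# Expected input: irregularly timed digitised readings, e.g.
# timestamp,water_level_m
# 1901-11-01 00:07:00,1.82
levels = (
    pd.read_csv("belfast_digitised_levels.csv", parse_dates=["timestamp"])
      .set_index("timestamp")
      .sort_index()
)

# Regularise onto two grids: 10 min (surge-resolving) and 60 min (historic standard).
ten_min = levels["water_level_m"].resample("10min").mean().interpolate(limit=6)
hourly = ten_min.resample("60min").mean()

# Write one file per calendar year for the hourly series.
for year, series in hourly.groupby(hourly.index.year):
    series.to_csv(f"belfast_hourly_{year}.csv", header=["water_level_m"])
```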