Abstract:
Laminated sediments spanning the last 20,000 years (though not continuously) in the Shaban Deep, a brine-filled basin in the northern Red Sea, were analyzed microscopically and with backscattered electron imagery in order to determine laminae composition, with emphasis on the diatomaceous component. Based on this detailed study, we present schematic models proposing paleoflux scenarios for laminae formation at different time-slices. The investigated core (GeoB 5836-2; 26°12.61'N, 35°21.56'E; water depth 1475 m) shows alternating light and dark laminae that are easily distinguishable in the mid-Holocene and at the end of the deglaciation (13-15 ka). Light layers are mainly composed of coccoliths, terrigenous material and diatom fragments, while dark layers consist almost exclusively of diatom frustules (monospecific or mixed assemblages). The regularity in the occurrence of coccolith/diatom couplets points to an annual deposition cycle in which contrasting seasons and their associated plankton blooms are represented (diatoms: fall/winter deposition; coccoliths: summer signal). We propose that, for the past ~15,000 years, the laminations represent two-season annual varves. Strong dissolution of carbonate, with the concomitant loss of the coccolith-rich layer in sediments older than 15 ka, prevents us from presenting a schematic model of annual deposition for that interval. However, the diatomaceous component reveals a marked switch in species composition between Last Glacial Maximum (LGM) sediments (dominated by Chaetoceros resting spores) and somewhat younger sediments (18-19 ka; dominated by Rhizosolenia). We propose that the different diatom assemblages reflect changing stratification conditions in the northern Red Sea: strong stratification, such as during the two meltwater pulses at 14.5 and 11.4 ka, is reflected in the sediment by Rhizosolenia layers, while Chaetoceros-dominated assemblages represent deep convection conditions.
Abstract:
In order to examine the spatial distribution of organic-walled dinoflagellate cysts (dinocysts) in recent sediments in relation to environmental conditions in the water column, thirty-two surface sediment samples from the NW African upwelling region (20-32°N) were investigated. Relative abundances of the dinocyst species show distinct regional differences, allowing the separation of four hydrographic regimes. (1) In the area off Cape Ghir, which is characterized by strongly seasonal upwelling and river discharge, Lingulodinium machaerophorum strongly dominates the associations, which are additionally characterized by cysts of Gymnodinium nolleri, cysts of Polykrikos kofoidii and cysts of Polykrikos schwartzii. (2) Off Cape Yubi, a region of increasingly perennial upwelling, L. machaerophorum, Brigantedinium spp., species of the genus Impagidinium and cysts of Protoperidinium stellatum occur in highest relative abundances. (3) In coastal samples between Cape Ghir and Cape Yubi, Gymnodinium catenatum, species of the genus Impagidinium, Nematosphaeropsis labyrinthus, Operculodinium centrocarpum, cysts of P. stellatum and Selenopemphix nephroides determine the species composition. (4) Off Cape Blanc, where upwelling prevails perennially, and at offshore sites, heterotrophic dinocyst species show highest relative abundances. A Redundancy Analysis reveals fluvial mud, sea surface temperature and the depth of the mixed layer in boreal spring as the most important parameters relating to the dinocyst species association. Dinocyst accumulation rates were calculated for a subset of samples using well-constrained sedimentation rates. The highest accumulation rates, up to almost 80,000 cysts cm⁻² kyr⁻¹, were found off Cape Ghir and Cape Yubi, reflecting their eutrophic upwelling filaments. A Redundancy Analysis gives evidence that primary productivity and the input of fluvial mud are most strongly related to the dinocyst association.
By means of accumulation rate data, quantitative cyst production of individual species can be considered independently from the rest of the association, allowing autecological interpretations. We show that a combined interpretation of relative abundances and accumulation rates of dinocysts can lead to a better understanding of the productivity conditions off NW Africa.
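Accumulation rates of the kind reported above are conventionally derived from the cyst concentration per gram of sediment, the dry bulk density and the sedimentation rate. A minimal sketch of that arithmetic; the numeric values are illustrative assumptions, not data from this study:

```python
def accumulation_rate(cysts_per_gram, dry_bulk_density_g_cm3, sed_rate_cm_kyr):
    """Accumulation rate in cysts cm^-2 kyr^-1.

    Units: cysts/g * g/cm^3 * cm/kyr = cysts cm^-2 kyr^-1
    """
    return cysts_per_gram * dry_bulk_density_g_cm3 * sed_rate_cm_kyr

# Illustrative (hypothetical) values only:
rate = accumulation_rate(cysts_per_gram=8000,
                         dry_bulk_density_g_cm3=0.8,
                         sed_rate_cm_kyr=12.0)
print(rate)  # 76800.0 cysts cm^-2 kyr^-1
```

Because the sedimentation rate enters multiplicatively, accumulation rates are only as well constrained as the age model, which is why the study restricts them to samples with well-constrained sedimentation rates.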
Abstract:
Two years of harmonized aerosol number size distribution data from 24 European field monitoring sites have been analysed. The results give a comprehensive overview of European near-surface aerosol particle number concentrations and number size distributions between 30 and 500 nm dry particle diameter. The spatial and temporal distributions of aerosols in the particle sizes most important for climate applications are presented. We also analyse the annual, weekly and diurnal cycles of the aerosol number concentrations, provide log-normal fitting parameters for median number size distributions, and give guidance notes for data users. Emphasis is placed on the usability of the results within the aerosol modelling community. We also show that the aerosol number concentrations of Aitken and accumulation mode particles (with 100 nm dry diameter as the cut-off between modes) are related, although there is significant variation in the ratios of the modal number concentrations. Different aerosol and station types can be distinguished from these data, and this methodology has potential for further categorization of stations by aerosol number size distribution type. The European submicron aerosol was divided into characteristic types: Central European aerosol, characterized by single-mode median size distributions, unimodal number concentration histograms and low variability in CCN-sized aerosol number concentrations; Nordic aerosol, with low number concentrations but pronounced seasonal variation, especially of Aitken mode particles; and mountain sites (altitude over 1000 m a.s.l.), with a strong seasonal cycle in aerosol number concentrations, high variability, and very low median number concentrations. Southern and Western European regions had fewer stations, which decreases the regional coverage of these results.
Aerosol number concentrations over Britain and Ireland had very high variance, and there are indications of mixed air masses from several source regions; the Mediterranean aerosol exhibits high seasonality and a strong accumulation mode in summer. The greatest concentrations were observed at the Ispra station in Northern Italy, with high accumulation mode number concentrations in winter. The aerosol number concentrations at the Arctic station Zeppelin in Ny-Ålesund, Svalbard, also have a strong seasonal cycle, with greater concentrations of accumulation mode particles in winter and a dominant Aitken mode in summer, indicating more recently formed particles. The number concentrations studied did not show any statistically significant work-week or weekday-related variation in any region. Analysis products have been made openly available to the research community on a freely accessible internet site. The results give the modelling community a reliable, easy-to-use and freely available comparison dataset of aerosol size distributions.
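The log-normal fitting parameters mentioned above parameterize each mode of the number size distribution as dN/dlogDp = N / (√(2π) log σg) · exp(−(log Dp − log Dpg)² / (2 log² σg)). A minimal single-mode fit with scipy on synthetic data; the mode parameters below are illustrative assumptions, not results from this dataset:

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(dp_nm, n_total, dpg_nm, sigma_g):
    """Single log-normal mode dN/dlogDp (cm^-3) at diameters dp_nm."""
    log_dp, log_dpg, log_sg = np.log10(dp_nm), np.log10(dpg_nm), np.log10(sigma_g)
    return (n_total / (np.sqrt(2 * np.pi) * log_sg)) * \
        np.exp(-(log_dp - log_dpg) ** 2 / (2 * log_sg ** 2))

# Synthetic "measured" accumulation-mode distribution over 30-500 nm,
# the same size window as the dataset above.
dp = np.logspace(np.log10(30), np.log10(500), 40)
true = (1200.0, 110.0, 1.7)  # N (cm^-3), geometric mean diameter (nm), geometric SD
rng = np.random.default_rng(0)
measured = lognormal_mode(dp, *true) * rng.normal(1.0, 0.03, dp.size)

popt, _ = curve_fit(lognormal_mode, dp, measured, p0=(1000.0, 100.0, 1.5))
print(popt)  # recovered (N, Dpg, sigma_g) close to (1200, 110, 1.7)
```

Multi-modal distributions are fit the same way with a sum of such terms, one per mode, which is how Aitken and accumulation modes are separated around the 100 nm cut-off.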
Abstract:
This study was performed to characterize evidence of potential unconformity-type U mineralizing fluids in drill core fractures from the Stewardson Lake prospect, in the Athabasca Basin, located in Northern Saskatchewan and Alberta, Canada. Fractures were visually classified into eight varieties. This classification scheme was improved with the use of mineralogical characterization through SEM (Scanning Electron Microscope) and XRD analyses of the fracture fills and resulted in the identification of various oxides, hydroxides, sulfides, and clays or clay-sized minerals. Fractures were tallied to a total of ten categories with some commonalities in color. The oxidative, reductive or mixed nature of the fluids interacting with each fracture was determined based on its fill mineralogy. The measured Pb isotopic signature of samples was used to distinguish fractures affected solely by fluids emanating from a U mineralization source, from those affected by mixed fluids. Anomalies in U and U-pathfinder elements detected in fractures assisted with attributing them to the secondary dispersion halo of potential mineralization. Three types of fracture functions (chimney, composite and drain) were defined based on their interpreted flow vector and history. A secondary dispersion halo boundary with a zone of dominance of infiltrating fluids was suggested for two boreholes. The control of fill mineralogy on fracture color was investigated and the indicative and non-indicative colors and minerals, with respect to a secondary dispersion halo, were formally described. The fracture colors and fills indicative of proximity to the basement host of the potential mineralization were also identified. 
In addition, three zones of interest were delineated in the boreholes with respect to their geochemical dynamics and their relationship to the potential mineralization: a shallow barren overburden zone, a dispersion and alteration zone at intermediate depth, and a second deeper zone of dispersion and alteration.
Abstract:
Canadian young people are increasingly connected through technological devices. This computer-mediated communication (CMC) can result in heightened connection and social support, but can also lead to inadequate personal and physical connections. As technology evolves, its influence on health and well-being is important to investigate, especially among youth. This study aims to investigate the potential influences of CMC on the health of Canadian youth, using both quantitative and qualitative research approaches. This mixed-methods study utilized data from the 2013-2014 Health Behaviour in School-aged Children survey for Canada (n=30,117) and focus group data involving Ontario youth (7 groups involving 40 youth). In the quantitative component, a random-effects multilevel Poisson regression was employed to identify the effects of CMC on loneliness, stratified to explore interaction with family communication quality. A qualitative, inductive content analysis was applied to the focus group transcripts using a grounded-theory-inspired methodology. Through open line-by-line coding followed by axial coding, main categories and themes were identified. The quality of family communication modified the association between CMC use and loneliness. Among youth in the highest quartile of family communication, daily use of verbal and social media CMC was significantly associated with reports of loneliness. The qualitative analysis revealed two overarching concepts: (1) the health impacts of CMC are multidimensional, and (2) there exists a duality of both positive and negative influences of CMC on health. Four themes were identified within this framework: (1) physical activity, (2) mental and emotional disturbance, (3) mindfulness, and (4) relationships. Overall, there is a high proportion of loneliness among Canadian youth, but this is not uniform for all.
The associations between CMC and health are influenced by external and contextual factors, including family communication quality. Further, the technologically rich world in which young people live has a diverse impact on their health. For youth, their relationships with others and the context of CMC use shape overall influences on their health.
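The quantitative component above rests on a Poisson regression with a log link, which is classically fit by Fisher scoring (iteratively reweighted least squares). A minimal single-level sketch in numpy on synthetic data; the study's actual model adds multilevel random effects, which this sketch omits, and the covariate here is a hypothetical stand-in:

```python
import numpy as np

def poisson_irls(X, y, n_iter=50):
    """Fit a Poisson GLM with log link by Fisher scoring.

    Update: beta += (X^T W X)^-1 X^T (y - mu), with W = diag(mu),
    since for Poisson the variance equals the mean.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        XtWX = X.T @ (X * mu[:, None])
        beta = beta + np.linalg.solve(XtWX, X.T @ (y - mu))
    return beta

# Synthetic data: intercept plus one binary exposure (e.g. daily CMC use).
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
X = np.column_stack([np.ones(5000), x])
y = rng.poisson(np.exp(0.2 + 0.5 * x))
b = poisson_irls(X, y)
print(b)  # approximately [0.2, 0.5]
```

The fitted slope exponentiates to a rate ratio, which is how "significantly associated with reports of loneliness" is quantified in such models.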
Abstract:
Introduction: Cancer is a leading cause of death worldwide. Nutrition may affect occurrence, recurrence and survival rates, and many cancer patients and survivors seek individualized nutrition advice. Appropriately skilled nutritional therapy (NT) practitioners may be well placed to safely provide this advice, but little is known of their perspectives on working with people affected by cancer. This mixed-methods study seeks to explore their views on training, barriers to practice, use of evidence, and other resources, to support the development of safe evidence-based practice. Preliminary data on barriers to practice are reported here. Methods: Two cohorts of NT practitioners were recruited from all UK registered NT practitioners via an anonymous online survey: 84 cancer practitioners (CP) and 165 non-cancer practitioners (NCP). Mixed quantitative and qualitative data were collected by the survey. Content analysis was used to analyze the qualitative data on the use of evidence, barriers to practice and perceived needs for working with clients with cancer, for further exploration using interviews and focus groups. Preliminary results: For the NCP cohort, exploration of perceived barriers to working with people affected by cancer suggested that the perceived complexity, risk and need for caution in this area of practice were important barriers. Insufficient specialist knowledge and skills also emerged as barriers. Some NCPs perceived opposition from medical practitioners and other mainstream healthcare professions as an obstacle to starting cancer practice. To overcome these barriers, specialist training emerged as most important. For the CP cohort, in exploring the skills they considered enabled them to undertake cancer work, specialist clinical and technical knowledge emerged strongly. Only 10% of CP participants did not want more work with people affected by cancer.
10% of CPs reported some NHS referrals, whereas most received clients by self-referral or from other practitioners. When considering barriers that impede their cancer practice, the dominant categories for CPs were hostility or opposition from mainstream oncology professionals, and a lack of dialogue and engagement with them. To overcome these barriers, CPs desired engagement with oncology professionals and recognized specialist cancer NT training. For both NCPs and CPs, evidence resources, practice guidelines and practitioner support networks also emerged as potential enablers of cancer practice. Conclusions: This is the first detailed exploration of NT practitioners' perceived barriers to working with people affected by cancer. Acquiring specialist skills and knowledge appears important to enable NCPs to start cancer work, and for CPs with these skills, the perceived barriers lie foremost in the relationship with mainstream cancer professionals. Further exploration of these themes, and of other NT practitioner perspectives on working with people affected by cancer, is underway. This work will inform and support the development of professional practice, training and other resources.
Abstract:
With the development of information technology, the theory and methodology of complex networks have been introduced into language research, transforming the language system into a complex network composed of nodes and edges for quantitative analysis of language structure. The development of dependency grammar provides theoretical support for the construction of treebank corpora, making statistical analysis of complex networks possible. This paper introduces the theory and methodology of complex networks and builds dependency syntactic networks based on a treebank of speeches from the EEE-4 oral test. By analysing the overall characteristics of the networks, including the number of edges, the number of nodes, the average degree, the average path length, network centrality and the degree distribution, it aims to find potential differences and similarities in the networks between various grades of speaking performance. Through clustering analysis, this research intends to demonstrate the discriminating power of the network parameters and provide a potential reference for scoring speaking performance.
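The global network characteristics listed above (number of nodes and edges, average degree, average path length, degree distribution) can all be computed directly from an adjacency structure. A minimal sketch on a toy dependency network; the sentence and its head-dependent links are invented for illustration, not drawn from the EEE-4 treebank:

```python
from collections import Counter, deque

# Toy undirected dependency network: words are nodes, head-dependent links are edges.
edges = [("I", "like"), ("like", "music"), ("like", "really"), ("music", "pop")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

n_nodes, n_edges = len(adj), len(edges)
avg_degree = 2 * n_edges / n_nodes  # each edge contributes to two degrees

def bfs_distances(src):
    """Shortest path lengths from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

# Average shortest path length over all ordered pairs (network is connected).
total = sum(d for v in adj for d in bfs_distances(v).values())
avg_path_length = total / (n_nodes * (n_nodes - 1))

degree_distribution = Counter(len(adj[v]) for v in adj)
print(n_nodes, n_edges, avg_degree, avg_path_length, degree_distribution)
```

Computing these parameters per speech then yields the feature vectors on which the clustering analysis across proficiency grades can operate.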
Abstract:
BACKGROUND: Considering the high rates of pain as well as its under-management in long-term care (LTC) settings, research is needed to explore innovations in pain management that take into account limited resource realities. It has been suggested that nurse practitioners, working within an inter-professional model, could potentially address the under-management of pain in LTC.
OBJECTIVES: This study evaluated the effectiveness of implementing a nurse practitioner-led, inter-professional pain management team in LTC in improving (a) pain-related resident outcomes; (b) clinical practice behaviours (e.g., documentation of pain assessments, use of non-pharmacological and pharmacological interventions); and, (c) quality of pain medication prescribing practices.
METHODS: A mixed-methods design, including both a quantitative and a qualitative component, was used to evaluate the nurse practitioner-led pain management team. Using a controlled before-after study, six LTC homes were allocated to one of three groups: 1) a nurse practitioner-led pain team (full intervention); 2) nurse practitioner but no pain management team (partial intervention); or 3) no nurse practitioner and no pain management team (control group). In total, 345 LTC residents were recruited to participate in the study: 139 residents in the full intervention group, 108 in the partial intervention group, and 98 in the control group. Data were collected in Canada from 2010 to 2012.
RESULTS: Implementing a nurse practitioner-led pain team in LTC significantly reduced residents' pain and improved functional status compared to usual care without access to a nurse practitioner. Positive changes in clinical practice behaviours (e.g., assessing pain, developing care plans related to pain management, documenting effectiveness of pain interventions) occurred over the intervention period for both the nurse practitioner-led pain team and nurse practitioner-only groups; these changes did not occur to the same extent, if at all, in the control group. Qualitative analysis highlighted LTC staff's perceived benefits of having access to a nurse practitioner and of the pain team, along with barriers to managing pain in LTC.
CONCLUSIONS: The findings from this study showed that implementing a nurse practitioner-led pain team can significantly improve resident pain and functional status as well as clinical practice behaviours of LTC staff. LTC homes should employ a nurse practitioner, ideally located onsite as opposed to an offsite consultative role, to enhance inter-professional collaboration and facilitate more consistent and timely access to pain management.
Abstract:
A new variant of the Element-Free Galerkin (EFG) method, which combines the diffraction method to characterize the crack-tip solution with the Heaviside enrichment function to represent the discontinuity due to a crack, has been used to model crack propagation through non-homogeneous materials. In the case of interface crack propagation, the kink angle is predicted by applying the maximum tangential principal stress (MTPS) criterion in conjunction with consideration of the energy release rate (ERR). The MTPS criterion is applied to the crack-tip stress field described by both the stress intensity factor (SIF) and the T-stress, which are extracted using the interaction integral method. The proposed EFG method has been developed and applied to 2D case studies involving a crack in an orthotropic material, a crack along an interface and a crack terminating at a bi-material interface, under mechanical or thermal loading; this is done to demonstrate the advantages and efficiency of the proposed methodology. The computed SIFs, T-stresses and predicted interface crack kink angles are compared with existing results in the literature and are found to be in good agreement. An example of crack growth through a particle-reinforced composite material, which may involve crack meandering around the particle, is reported.
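The maximum tangential stress criterion used above for kink-angle prediction has a well-known closed form in terms of the mode-I and mode-II SIFs when the T-stress is neglected: the kink angle θc solves K_I sin θc + K_II (3 cos θc − 1) = 0. A minimal sketch of that classical T-stress-free form; the paper's actual criterion additionally includes the T-stress contribution, which this sketch omits:

```python
import math

def mts_kink_angle(k1, k2):
    """Crack kink angle (radians) from the maximum tangential stress criterion.

    Solves K_I sin(theta) + K_II (3 cos(theta) - 1) = 0 via
    theta_c = 2 atan((K_I - sqrt(K_I^2 + 8 K_II^2)) / (4 K_II));
    pure mode I (K_II = 0) gives straight-ahead growth.
    """
    if k2 == 0.0:
        return 0.0
    return 2.0 * math.atan((k1 - math.sqrt(k1 ** 2 + 8.0 * k2 ** 2)) / (4.0 * k2))

print(math.degrees(mts_kink_angle(1.0, 0.0)))  # 0.0 (pure mode I)
print(math.degrees(mts_kink_angle(0.0, 1.0)))  # about -70.5 (pure mode II)
```

The sign convention gives a negative kink angle for positive K_II; the classical pure mode-II result of about 70.5° off the crack plane is recovered.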
Abstract:
Data from the World Federation of Exchanges show that Brazil's Sao Paulo stock exchange is one of the largest worldwide in terms of market value. The objective of this study is therefore to obtain univariate and bivariate forecasting models based on intraday data from the futures and spot markets of the BOVESPA index, in order to verify whether arbitrage opportunities exist in the Brazilian financial market. To this end, three econometric forecasting models were built: ARFIMA, vector autoregressive (VAR), and vector error correction (VEC). In addition, we present the results of a Granger causality test for these series. Such a study matters both for identifying arbitrage opportunities in financial markets and, in particular, for the application of these models to data of this nature. In terms of forecasting performance, the VEC model showed the best results. The causality test shows that the futures BOVESPA index Granger-causes the spot BOVESPA index. This result may indicate arbitrage opportunities in Brazil.
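The Granger test reported above asks whether lags of the futures series improve forecasts of the spot series beyond the spot series' own lags; it reduces to an F-test between a restricted and an unrestricted OLS regression. A minimal one-lag sketch on synthetic data; the series below are invented stand-ins, not BOVESPA data:

```python
import numpy as np
from scipy import stats

def granger_f_test(y, x, p=1):
    """F-test that lags 1..p of x Granger-cause y."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    X_r = np.hstack([ones, lags_y])            # restricted: own lags only
    X_u = np.hstack([ones, lags_y, lags_x])    # unrestricted: adds lags of x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    df2 = (n - p) - (1 + 2 * p)                # residual degrees of freedom
    f_stat = ((rss_r - rss_u) / p) / (rss_u / df2)
    return f_stat, stats.f.sf(f_stat, p, df2)

# Synthetic example where x leads y (as "futures" leading "spot"):
rng = np.random.default_rng(2)
x = rng.normal(size=600)
y = np.zeros(600)
for t in range(1, 600):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
f_stat, p_value = granger_f_test(y, x, p=1)
print(f_stat, p_value)  # large F, p-value near zero: x Granger-causes y
```

Running the test in the other direction (swapping y and x) is what distinguishes one-way lead-lag structure, the pattern the study reports between futures and spot.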
Abstract:
This study presents the validation of the observations made by the fisheries observer programme known as the Programa Bitácoras de Pesca (PBP) during the period 2005-2011 in the distribution area in which the industrial purse-seine vessels fishing the north-central stock of Peruvian anchoveta (Engraulis ringens) operate. In addition, for the same period and area, the magnitudes of discards due to excess catch, discards of juveniles and the incidental catch of this fishery were estimated. A total of 3,768 trips out of 302,859 were observed, a coverage of 1.2%. The data on discards due to excess catch, discards of juveniles and incidental catch recorded on the observed trips were characterized by a high proportion of zeros. To validate the observations, a simulation study based on Monte Carlo methodology was carried out using a negative binomial distribution model. This makes it possible to infer the optimal coverage level and to determine whether the information obtained by the observer programme is reliable. From this analysis, it is concluded that current observation levels should be increased to a coverage of at least 10% of all trips made each year by the industrial purse-seine vessels fishing the north-central stock of Peruvian anchoveta. Discards due to excess catch, discards of juveniles and incidental catch were estimated using three methodologies: bootstrap, generalized linear models (GLM) and the delta model. Each methodology estimated different magnitudes with similar trends.
The estimated magnitudes were compared using a Bayesian ANOVA, which showed little evidence that the estimated magnitudes of discards due to excess catch differed between methodologies; the same was found for incidental catch, whereas for discards of juveniles there were substantial differences. The methodology that satisfied the assumptions and explained the greatest variability of the modelled variables was the delta model, which appears to be the best alternative for estimation given the high proportion of zeros in the data. The average estimates of discards due to excess catch, discards of juveniles and incidental catch using the delta model were 252,580, 41,772 and 44,823 tonnes respectively, together representing 5.74% of landings. In addition, using the estimated magnitude of juvenile discards, a biomass projection exercise was carried out under the hypothetical scenario of no fishing mortality and assuming that the discarded juveniles measured only 8 to 11 cm; it showed that the biomass that would not become available to the fishery is between 52,000 and 93,000 tonnes.
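The delta model favoured above handles the high proportion of zeros by modelling the data in two parts: the probability of a nonzero observation and the mean of the positive observations; the overall mean is the product of the two. A minimal sketch of the point estimator on synthetic data (invented discard-like records, not the PBP data):

```python
import numpy as np

def delta_mean(obs):
    """Delta-model estimate of the mean of zero-inflated data.

    E[obs] = P(obs > 0) * E[obs | obs > 0]
    """
    obs = np.asarray(obs, dtype=float)
    positive = obs[obs > 0]
    if positive.size == 0:
        return 0.0
    return (positive.size / obs.size) * positive.mean()

# Synthetic discard-like data: ~80% zeros, lognormal positive catches.
rng = np.random.default_rng(3)
nonzero = rng.random(10000) < 0.2
obs = np.where(nonzero, rng.lognormal(mean=3.0, sigma=0.5, size=10000), 0.0)
print(delta_mean(obs), obs.mean())  # the two point estimates coincide
```

For the raw mean the two estimates coincide exactly; the value of the delta decomposition is that each part can be modelled separately with covariates (e.g. a binomial model for occurrence and a lognormal or gamma model for positive amounts), which is what makes it suitable for zero-heavy observer data.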
Abstract:
The use of the Design by Analysis (DBA) route is a modern trend in international pressure vessel and piping codes in mechanical engineering. However, to apply DBA to structures under variable mechanical and thermal loads, it is necessary to ensure that the plastic collapse modes, alternate plasticity and incremental collapse (with instantaneous plastic collapse as a particular case), are precluded. The tool available to achieve this is the shakedown theory. Unfortunately, practical numerical applications of the shakedown theory result in very large nonlinear optimization problems with nonlinear constraints. Precise, robust and efficient algorithms and finite elements to solve this problem in finite dimension are a more recent achievement. However, to solve real problems at an industrial level, it is also necessary to consider more realistic material properties and to carry out 3D analyses. Limited kinematic hardening is a typical property of the usual steels and should be considered in realistic applications. In this paper, a new finite element with internal thermodynamical variables to model kinematic hardening materials is developed and tested. This element is a mixed ten-node tetrahedron, and through an appropriate change of variables it is possible to embed it in a shakedown analysis software developed by Zouain and co-workers for elastic ideally-plastic materials, and then use it to perform 3D shakedown analyses in cases with limited kinematic hardening.
Abstract:
The use of the Design by Analysis (DBA) concept is a trend in modern pressure vessel and piping calculations. DBA's flexibility allows us to deal with unexpected configurations detected at in-service inspections. It is also important in life-extension calculations, when deviations from the original standard hypotheses adopted in Design by Formula can occur. To apply DBA to structures under variable mechanical and thermal loads, it is necessary that alternate plasticity and incremental collapse (with instantaneous plastic collapse as a particular case) be precluded. These are the two basic failure modes considered by the ASME and European standards in DBA. The shakedown theory is the tool available to achieve this goal, and to apply it only the range of the variable loads and the material properties are needed. Precise, robust and efficient algorithms to solve the very large nonlinear optimization problems generated in numerical applications of the shakedown theory are a recent achievement. Zouain and co-workers developed one of these algorithms for elastic ideally-plastic materials. However, more realistic material properties must be considered in practical applications. This paper presents an enhancement of this algorithm to deal with limited kinematic hardening, a typical property of the usual steels. This is done using internal thermodynamic variables. A discrete algorithm is obtained using a plane-stress mixed finite element with an internal variable. An example of a beam clamped at one end, under constant axial force and variable moment, is presented to show the importance of considering limited kinematic hardening in a shakedown analysis.
Abstract:
In the design or safety assessment of mechanical structures, the use of the Design by Analysis (DBA) route is a modern trend. However, to make it possible to apply DBA to structures under variable loads, two basic failure modes considered by the ASME and European standards must be precluded: alternate plasticity and incremental collapse (with instantaneous plastic collapse as a particular case). Shakedown theory is a tool that permits us to ensure that these kinds of failure will be avoided. In practical applications, however, very large nonlinear optimization problems are generated, and only in recent years has it become possible to obtain algorithms that are sufficiently accurate, robust and efficient for dealing with this class of problems. In this paper, one of these shakedown algorithms, developed for elastic ideally-plastic structures, is enhanced to include limited kinematic hardening, a more realistic material behavior. This is done in the continuous model by using internal thermodynamic variables. A corresponding discrete model is obtained using an axisymmetric mixed finite element with an internal variable. A thick-walled sphere under variable thermal and pressure loads is used as an example to show the importance of considering limited kinematic hardening in shakedown calculations.
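Melan's static shakedown theorem underlying these papers becomes, after discretization with a piecewise-linear yield condition, a linear program: maximize the load factor μ such that a time-independent self-equilibrated residual stress ρ keeps μσ^E(k) + ρ inside the yield surface at every vertex k of the load domain. A deliberately tiny sketch for two parallel bars (self-equilibrium ρ1 + ρ2 = 0) with yield |σ| ≤ σy; this is an invented toy example without the hardening variables the papers add, not their finite element formulation:

```python
import numpy as np
from scipy.optimize import linprog

sigma_y = 1.0
# Elastic stresses in each bar at the two load-domain vertices (illustrative values).
sig_e = {1: [0.0, 1.5], 2: [0.0, -1.5]}

# Variables: [mu, rho1]; bar 2 carries rho2 = -rho1 (self-equilibrium).
# Constraints: -sigma_y <= mu*sig_e + rho_i <= sigma_y for every bar and vertex.
A_ub, b_ub = [], []
for bar, stresses in sig_e.items():
    rho_coef = 1.0 if bar == 1 else -1.0
    for s in stresses:
        A_ub.append([s, rho_coef]);  b_ub.append(sigma_y)    # upper yield bound
        A_ub.append([-s, -rho_coef]); b_ub.append(sigma_y)   # lower yield bound

res = linprog(c=[-1.0, 0.0],              # maximize mu
              A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None), (None, None)])
mu_shakedown = res.x[0]
print(mu_shakedown)  # 4/3, versus an elastic limit of 2/3 with rho = 0
```

Even this toy case shows the point of the theory: allowing a residual stress field doubles the admissible load factor relative to the purely elastic limit, and adding limited kinematic hardening would enlarge the feasible set further by letting the yield surface's center shift within bounds.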
Abstract:
The World Bank proposes good governance as the strategy for correcting the ills of bad governance and facilitating development in developing countries (Carayannis, Pirzadeh & Popescu, 2012; Hilyard & Wilks, 1998; Leftwich, 1993; World Bank, 1989). In this perspective, institutional reform and a more inclusive public policy arena are two critical strategies intended to establish good governance, according to the Bank and the other Bretton Woods institutions. The problem is that many of these developing countries lack the institutional architecture that such new measures presuppose. This thesis examines and explains how one developing state, the Commonwealth of Dominica, embarked on legislation aimed at integrity in public office. This law, the Integrity in Public Office Act (IPO), was passed in 2003 and implemented in 2008. The thesis analyses the power relations among the dominant actors surrounding the evolution of the law, and therefore employs a combination of social network analysis techniques and qualitative research to answer the main question: why did the state develop and implement the current design of the IPO (2003)? This question is all the more significant when we consider that, contrary to the existing research on the subject, Dominica's IPO diverges considerably in structure from the ideal-type IPO. We argue that "rational" actors, aware of their structural position within a network of actors, used their power resources to shape the institution so that it would serve their interests and those of their allies.
We further hypothesize, first, that the choice of a specialized anti-corruption agency and the subsequent design of that institution reflect the preferences of the dominant actors who took part in its creation; and second, as a rival hypothesis, that the features of alternative models of public integrity institutions are those of the non-dominant actors. Our results are mixed. The power game was limited to a small group of dominant actors who sought to use the creation of the law to secure their legitimacy and political survival. Unsurprisingly, no actor advanced an alternative model. We therefore conclude that the law is the outcome of a partisan power game. This research responds to the scarcity of research on the design of public integrity institutions, which appears largely to favour an organizational and structural bias. Moreover, by studying the subject from the standpoint of power relations (power itself viewed from both agential and structural angles), the thesis brings conceptual, methodological and analytical rigour to the discourse on the creation of these institutions by studying their genesis from both agential and structural perspectives. In addition, the results strengthen our ability to predict when, and with what intensity, an actor would deploy its power resources.