922 results for expense


Relevance:

10.00%

Publisher:

Abstract:

Landscape painting emerged as a pictorial current at the end of the nineteenth century, the product of a convergence of academic and scientific interests that culminated in a fascination with nature. It is framed within a political outlook that placed our country amid expectations of a new socio-cultural structure emphasizing liberty and human rights, the right to private property and, above all, an opening of horizons toward social and cultural integration; the need arose to communicate and to draw inspiration from one's own land. This research seeks to inquire into the different processes artists experience on contact with nature, processes internalized through their varied experiences with the techniques of the artistic process, among which are captured light, space, colour, what the viewer takes from the works, and the expectations held of the artist. This process may or may not be artistic: sometimes guided by academicism, at other times by commission, with expectations that often serve political ends (placing subject matter above academic value). Such works translate into a demand on technique, since the aim is to capture nature, perfect in itself, through a meticulous, almost perfectionist use of that technique, without grasping that individuals, like fingerprints, differ inwardly; they will therefore capture the essence of nature according to the many experiences they have had with it. The idea of painting landscape is no longer tied solely to observation but to the aim of representing the environment in each artist's own particular and distinctive manner.


Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Civil e Ambiental, Programa de Pós-Graduação em Tecnologia Ambiental e Recursos Hídricos, 2015.


Background: Many acute stroke trials have given neutral results. Sub-optimal statistical analyses may be failing to detect efficacy. Methods which take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials. Methods: Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome – Rankin Scale and Barthel Index – were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes gained from each method were compared using Friedman two-way ANOVA. Results: Fifty-five comparisons (54,173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P < 0.00001). The ordering of the methods showed that the ordinal method of Whitehead and comparison of means produced significantly lower sample sizes than the other methods. The ordinal data method on average reduced sample size by 28% (inter-quartile range 14–53%) compared with the comparison of proportions; however, a 22% increase in sample size was seen with the ordinal method for trials assessing thrombolysis. The comparison of medians method of Payne gave the largest sample sizes. Conclusions: Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome
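The contrast between the binary and ordinal calculations can be sketched in Python. The ordinal formula follows a Whitehead-style proportional-odds sample size; the category probabilities, odds ratio, alpha and power below are illustrative assumptions, not values taken from the trial data:

```python
from math import log
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard-normal quantile function

def n_binary(p1, p2, alpha=0.05, power=0.9):
    """Total sample size for comparing two proportions (normal approximation)."""
    za, zb = z(1 - alpha / 2), z(power)
    per_arm = (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return 2 * per_arm

def n_ordinal(control_probs, odds_ratio, alpha=0.05, power=0.9):
    """Total sample size for an ordinal outcome under proportional odds
    (Whitehead-style): N = 6 (za + zb)^2 / [(log OR)^2 (1 - sum(pbar_i^3))]."""
    # shift the control distribution by the odds ratio to get the treatment arm
    treat_probs, cum, prev = [], 0.0, 0.0
    for p in control_probs:
        cum += p
        cum_t = odds_ratio * cum / (1 + cum * (odds_ratio - 1))  # shifted cumulative
        treat_probs.append(cum_t - prev)
        prev = cum_t
    pbar = [(pc + pt) / 2 for pc, pt in zip(control_probs, treat_probs)]
    za, zb = z(1 - alpha / 2), z(power)
    return 6 * (za + zb) ** 2 / (log(odds_ratio) ** 2 * (1 - sum(p ** 3 for p in pbar)))

# six roughly uniform outcome categories; OR = 1.5 shifts the median split 50% -> 60%
N_bin = n_binary(0.5, 0.6)            # outcome dichotomised at the median
N_ord = n_ordinal([1 / 6] * 6, 1.5)   # full ordinal scale
```

With these illustrative inputs the ordinal calculation yields a markedly smaller total sample size than the dichotomised one, consistent with the direction of the abstract's finding.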


Protest crises in Côte d'Ivoire have been extremely violent. Over the past fifteen years, more than 400 people have died, killed in clashes with the security forces or with counter-demonstrators. Despite the gravity of the problem, few scientific studies have addressed it, and the rare existing analyses and inquiries focus one-sidedly on the identity and criminal responsibility of the putative perpetrators and instigators of this violence. The present study challenges the moralism inherent in these approaches and tackles the question from the angle of interaction: this thesis aims to understand the processes and logics underlying the use of violence during demonstrations. The theoretical framework used in this qualitative study is symbolic interactionism. The analytic material consists of interviews and various documents. Thirty-three (33) semi-structured interviews were conducted with police officers and demonstrators, recruited by snowball sampling, between 3 January and 15 May 2013 in Abidjan. Investigation reports by the NGO Human Rights Watch on the crisis demonstrations, police training manuals and various other peripheral materials were also consulted. The data were analysed following the principles and techniques of grounded theory (Paillé, 1994). Three main results were obtained. First, the Ivorian system of public-order policing is designed on the model of a "police of the prince". The security forces as a whole occupy a subordinate, executant role within it. They are placed under political authority with a mandate of unconditional defence of the institutions. The resulting standard style of crowd management is legalistic and repressive, corresponding to the escalated-force style (McPhail, Schweingruber, & McCarthy, 1998). This "police of the prince" nevertheless has room for manoeuvre on the ground, which allows it to modulate its style according to how it construes the demonstrators' attitude: paternalistic with crowds deemed calm, it becomes repressive or deviant with crowds it defines as hostile. Second, contrary to a victim-centred conception of the crowd, the violence is a dynamic situational transaction between security forces and demonstrators. The violence follows an ascending process whose sequences and rules of progression are described. Thus the first level, at which most demonstrations stop, is that of bilateral non-lethal force, in which both actors, protesters and police, resort to non-incapacitating weapons, the stones of the former answering the tear gas of the latter. The second level corresponds to unilateral lethality: the police open fire when the demonstrators come too close. The third and final level is reached when the demonstrators in turn use firearms; lethality is then bilateral. Third and last, the concept of "republican indignity" accounts for the logic of violence in demonstrations. Violence breaks out and intensifies when one of the parties, demonstrators or police, interprets an act by the adversary as a breach of the role expected of the status it claims in the demonstration. This act judged unworthy has the consequence of depriving its author of the deference attached to that status and of justifying the use of force against him. From the police point of view, these acts of indignity are symbolized by the figure of the hostile demonstrator. For the demonstrators, the indignity of the security forces is recognized in acts that liken them to a private militia. The perceived degree of indignity of the act explains the level of violence allocated.


We review our work on generalisations of the Becker-Döring model of cluster formation as applied to nucleation theory, polymer growth kinetics, and the formation of supramolecular structures in colloidal chemistry. One valuable tool in analysing mathematical models of these systems has been the coarse-graining approximation, which enables macroscopic models for observable quantities to be derived from microscopic ones. This permits assumptions about the detailed molecular mechanisms to be tested, and their influence on the large-scale kinetics of surfactant self-assembly to be elucidated. We also summarise our more recent results on Becker-Döring systems, notably demonstrating that cross-inhibition and autocatalysis can destabilise a uniform solution and lead to a competitive environment in which some species flourish at the expense of others, phenomena relevant in models of the origins of life.
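A minimal numerical sketch of the (truncated) Becker-Döring system may help fix ideas. The cluster fluxes are J_r = a_r c_1 c_r - b_{r+1} c_{r+1}, with dc_r/dt = J_{r-1} - J_r for r ≥ 2 and the monomer equation closing the mass balance; the rate coefficients, truncation size and time step below are arbitrary illustrative choices:

```python
import numpy as np

def becker_doring_step(c, a, b, dt):
    """One explicit-Euler step of the truncated Becker-Doring equations.
    c[r-1] = concentration of r-clusters; a, b = aggregation/fragmentation rates.
    J_r = a_r c_1 c_r - b_{r+1} c_{r+1} is the flux from size r to r+1."""
    R = len(c)
    J = np.zeros(R)                     # J[R-1] = 0 closes the truncated system
    J[:-1] = a[:-1] * c[0] * c[:-1] - b[1:] * c[1:]
    dc = np.zeros(R)
    dc[1:] = J[:-1] - J[1:]             # dc_r/dt = J_{r-1} - J_r for r >= 2
    dc[0] = -J[0] - J.sum()             # monomers consumed by every flux (twice by J_1)
    return c + dt * dc

# start from pure monomer and integrate; total mass sum(r * c_r) is conserved
R = 20
c = np.zeros(R); c[0] = 1.0
a = np.ones(R)
b = 0.5 * np.ones(R)
sizes = np.arange(1, R + 1)
m0 = (sizes * c).sum()
for _ in range(2000):
    c = becker_doring_step(c, a, b, dt=0.01)
```

Conservation of total monomer mass is the basic structural property any coarse-graining of these equations must preserve, which is why it is checked here.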


Over the past century, improvements in living conditions and major advances in the biomedical sciences have pushed back the frontiers of life. Until the beginning of the twentieth century, death was a relatively brief process, following infectious diseases, and took place at home. Today it comes instead at the end of a long battle against incurable diseases and the various afflictions of old age, and most often takes place in hospital. To understand and address the suffering of today's patient, one must understand what this new end-of-life context produces as lived experience, both for the patient and for the clinician who cares for him or her. This thesis is thus an exploratory and critical study of the psychological issues surrounding this contemporary death, with a primary interest in optimizing the relief of the patient's existential suffering in this context. First, I address the patient's suffering. Through a critical review of the literature, a precise and operational definition of existential suffering at the end of life, with distinguishing criteria, is proposed. I advance the hypothesis that suffering can be defined as a construction of the mind organized around three concepts: integrity, alterity and temporality. Integrity, in the sense that the ill individual initially feels threatened in his or her person (relation to self). Alterity, in the sense that the perception of external conditions affects the distress experienced (relation to the Other). And finally temporality, in the sense that the individual suffering existentially very often seems trapped in a particular space-time (relation to time). Next, I address the caregiver's suffering. In the context of a terminal condition, burdensome interventions (e.g. deep palliative sedation, invasive procedures) are sometimes discussed and even proposed by a caregiver. I bring out various sources of suffering specific to the caregiver and generated by contact with the patient (for example: a battered ideal, personal values, feelings of powerlessness, transference and counter-transference reactions, identification with the patient, death anxiety). I then show how these sources of suffering can constitute barriers to addressing the patient's suffering, notably through their possible influence on the therapeutic approach chosen. It will be seen that a caregiver's suffering at times contributes to putting in place measures aimed more at soothing the caregiver, to the patient's detriment. Finally, I discuss how the encounter between caregiver and patient can become a privileged space for addressing suffering. I offer some suggestions for improving end-of-life care through an accompaniment that puts medical technology at the service of compassion while preserving the singularity of the patient's experience. For the caregiver, this will require better training, an awareness of his or her own suffering, and an understanding of his or her limits in relieving the Other.


This thesis analyses practices of gender equality within Malian non-governmental organizations (NGOs) that have received Canadian funding. Official development assistance has undergone major transformations since the 1950s. One of these was the important role played by NGOs in the 1990s, following the adoption of structural adjustment policies and the end of the state's monopoly on public aid for development projects. Among other things, NGOs were called upon to promote gender-equality policies. The importance of NGOs in official development assistance created relations of dependency on donors, who impose conditionalities. Our results show that donors require gender equality among programme beneficiaries but, paradoxically, do not require it within the NGOs themselves and in their human resources. Analysing the staff composition of eight Malian NGOs, we find that 34% of staff are women and 66% are men, a considerable imbalance in terms of parity. A finer-grained analysis indicates, however, that practices of gender equality in structures and in human-resource management differ from one NGO to another, depending largely on the will and values of the managers. Our research brought out several explanations for this disparity in women's employment. The reasons most often mentioned were: (1) the need for competent staff; (2) work-family balance; (3) the socio-cultural context; (4) the interpretation of Islam with regard to equality. Indeed, our results show that under donor influence the NGOs became professionalized, that the impact of professionalization differed by gender, and that it came at women's expense. Thus some managers, whatever their sex, prefer to recruit more men because they judge them more competent. Our results confirm glass-ceiling theory, which highlights women's difficulty in attaining positions of responsibility. They also show that in Mali the socio-cultural context and religion play a large role in social relations, above all concerning women's place in society.


New technologies appear constantly, and their use can bring countless benefits both to those who use them directly and to society as a whole. In this vein, the State can also use information and communication technologies to improve the level of service delivered to citizens, give society a better quality of life, and optimize public expenditure by focusing it on the main necessities. Accordingly, there is much research on Electronic Government (e-Gov) policies and their main effects on citizens and society. This study examines the concept of Electronic Government and seeks to understand the process of implementing free software in the agencies of the Direct Administration of Rio Grande do Norte. It further analyses whether this adoption reduces costs for the state treasury, and aims to identify the role of free software in the Administration and the foundations of the state's Electronic Government policy. Through qualitative interviews with technology coordinators and managers in three State Secretariats, it was possible to map the paths the Government has been following to endow the State with technological capacity. Rio Grande do Norte was found to be still immature with regard to e-Gov practices and free software, with few agencies having concrete and viable initiatives in this area. It still lacks a strategic definition of the role of technology, and more investment in staff and equipment infrastructure. Advances were also observed, such as the creation of a normative body, the CETIC (State Council of Information and Communication Technology); a Technology Master Plan, which provides a needed diagnosis of the state of technology in the State and proposes several goals for the area; a postgraduate course for technology managers; and training in BrOffice (OpenOffice) for 1,120 public servants.


Despite the organizational benefits of treating employees fairly, both anecdotal and empirical evidence suggest that managers do not behave fairly towards their employees in a consistent manner. As treating employees fairly takes up personal resources such as time, effort, and attention, I argue that when managers face high workloads (i.e., high amounts of work and time pressure), they are unable to devote such personal resources to effectively meet both core technical task requirements and treat employees fairly. I propose that in general, managers tend to view their core technical task performance as more important than being fair in their dealings with employees; as a result, when faced with high workloads, they tend to prioritize the former at the expense of the latter. I also propose that managerial fairness will suffer more as a result of heightened workloads than will core technical task performance, unless managers perceive their organization to explicitly reward fair treatment of employees. I find support for my hypotheses across three studies: two experimental studies (with online participants and students respectively) and one field study of managers from a variety of organizations. I discuss the implications of studying fairness in the wider context of managers' complex role in organizations for the fairness and managerial work demands literatures.


Background: The ageing population, with concomitant increase in chronic conditions, is increasing the presence of older people with complex needs in hospital. People with dementia are one of these complex populations and are particularly vulnerable to complications in hospital. Registered nurses can offer simultaneous assessment and intervention to prevent or mitigate hospital-acquired complications through their skilled brokerage between patient needs and hospital functions. A range of patient outcome measures that are sensitive to nursing care has been tested in nursing work environments across the world. However, none of these measures have focused on hospitalised older patients. Method: This thesis explores nursing-sensitive complications for older patients with and without dementia using an internationally recognised, risk-adjusted patient outcome approach. Specifically explored are: the differences between rates of complications; the costs of complications; and cost comparisons of patient complexity. A retrospective cohort study of an Australian state’s 2006–07 public hospital discharge data was utilised to identify patient episodes for people over age 50 (N=222,440) where dementia was identified as a primary or secondary diagnosis (N=44,422). Extra costs for patient episodes were estimated based on length of stay (LOS) above the average for each patient’s Diagnosis Related Group (DRG) (N=157,178) and were modelled using linear regression analysis to establish the strongest patient complexity predictors of cost. Results: Hospitalised patients with a primary or secondary diagnosis of dementia had higher rates of complications than did their same-age peers. The highest rates and relative risk for people with dementia were found in four key complications: urinary tract infections; pressure injuries; pneumonia, and delirium. 
While 21.9% of dementia patients (9,751/44,488, p<0.0001) suffered a complication, only 8.8% of non-dementia patients did so (33,501/381,788, p<0.0001), giving dementia patients a 2.5 relative risk of acquiring a complication (p<0.0001). These four key complications in patients over 50 both with and without dementia were associated with an eightfold increase in length of stay (813%, or 3.6 days / 0.4 days) and double the increased estimated mean episode cost (199%, or A$16,403 / A$8,240). These four complications were associated with 24.7% of the estimated cost of additional days spent in hospital in 2006–07 in NSW (A$226 million / A$914 million). Dementia patients accounted for 22.0% of these costs (A$49 million / A$226 million) even though they were only 10.4% of the population (44,488/426,276 episodes). Hospital-acquired complications, particularly for people with a comorbidity of dementia, cost more than other kinds of inpatient complexity, but admission severity was a better predictor of excess cost. Discussion: Four key complications occur more often in older patients with dementia and the high rate of these complications makes them expensive. These complications are potentially preventable. However, the care that can prevent them (such as mobility, hydration, nutrition and communication) is known to be rationed or left unfinished by nurses. Older hospitalised people who have complex needs, such as those with dementia, are more likely to experience care rationing as their care tends to take longer, be less predictable and less curative in nature. This thesis offers the theoretical proposition that evidence-based nursing practices are rationed for complex older patients and that this rationed care contributes to functional and cognitive decline during hospitalisation. This, in turn, contributes to the high rates of complications observed. Thus four key complications can be seen as a ‘Failure to Maintain’ complex older people in hospital.
‘Failure to Maintain’ is the inadequate delivery of essential functional and cognitive care for a complex older person in hospital resulting in a complication, and is recommended as a useful indicator for hospital quality. Conclusions: When examining extra length of stay in hospital, complications and comorbid dementia are costly. Complications are potentially preventable, and dementia care in hospitals can be improved. Hospitals and governments looking to decrease costs can engage in risk-reduction strategies for common nurse-sensitive complications, such as healthy nursing work environments that minimise nurses’ rationing of functional and cognitive care. The conceptualisation of complex older patients as ‘business as usual’ rather than a ‘burden’ is likely necessary for sustainable health care services of the future. The use of the ‘Failure to Maintain’ indicators at institution and state levels may aid in embedding this approach for complex older patients into health organisations. Ongoing investigation is warranted into the relationships between the largest health services expense (hospitals), the largest hospital population (complex older patients), and the largest hospital expense (nurses). The ‘Failure to Maintain’ quality indicator makes a useful and substantive contribution to further clinical, administrative and research developments.
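The headline ratios can be reproduced directly from the counts reported in the abstract; a quick sanity check in Python (all figures are the abstract's own):

```python
# Recomputing the thesis abstract's headline figures from its reported counts.
dementia_with_comp, dementia_total = 9_751, 44_488
other_with_comp, other_total = 33_501, 381_788

rate_dementia = dementia_with_comp / dementia_total   # complication rate, dementia (~21.9%)
rate_other = other_with_comp / other_total            # complication rate, no dementia (~8.8%)
relative_risk = rate_dementia / rate_other            # ~2.5, as reported

# Share of excess-bed-day cost attributed to the four key complications
share_of_cost = 226 / 914        # A$226m of A$914m -> ~24.7%
dementia_share = 49 / 226        # A$49m of A$226m -> ~22%
pop_share = 44_488 / 426_276     # dementia episodes, ~10.4% of the population
```

The recomputed rates and shares match the percentages quoted in the abstract to rounding.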


Proportion responding (PR) is the preference for proportionally higher gains, such that the same absolute quantity is valued more as the reference group decreases. This research investigated this kind of proportion PR in decisions about saving lives (e.g., saving 10/10 lives is preferred to saving 10/100 lives). The results of two studies suggest that PR does not stem from an overall tendency to choose higher proportions, but rather from faulty deliberative reasoning. In particular, people who display PR are less likely to engage in deliberative reflection as measured by response time, the Process Dissociation Procedure, the Cognitive Reflection Test, a numeracy test, and a task assessing denominator neglect. This association between faulty deliberation and PR was observed only when choosing the highest proportion was non-normative because it came at the expense of absolute gains (e.g., saving 10/10 lives is preferred to saving 11/100 lives). These results help to make sense of discrepant findings in previous research, pertaining to how PR relates to biased reasoning and decision making.
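The distinction between the normative and non-normative cases can be made concrete with a toy sketch (hypothetical helper functions for illustration, not the authors' materials):

```python
# Two decision rules for "save X of N lives" choices, as described above.
def prefer_higher_proportion(saved_a, total_a, saved_b, total_b):
    """A PR-style chooser: picks whichever option saves the higher *proportion*."""
    return "A" if saved_a / total_a >= saved_b / total_b else "B"

def prefer_more_lives(saved_a, total_a, saved_b, total_b):
    """A deliberative chooser: picks whichever option saves more lives outright."""
    return "A" if saved_a >= saved_b else "B"

# 10/10 vs 10/100: proportions differ but absolute gains are equal, so PR costs nothing.
# 10/10 vs 11/100: the proportional choice now forgoes one life, so PR is non-normative.
```

Only in the second pairing do the two rules disagree, which is exactly the case where the studies found PR associated with less deliberative reflection.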


The neural crest is a group of migratory, multipotent stem cells that play a crucial role in many aspects of embryonic development. This uniquely vertebrate cell population forms within the dorsal neural tube but then emigrates out and migrates long distances to different regions of the body. These cells contribute to formation of many structures such as the peripheral nervous system, craniofacial skeleton, and pigmentation of the skin. Why some neural tube cells undergo a change from neural to neural crest cell fate is unknown as is the timing of both onset and cessation of their emigration from the neural tube. In recent years, growing evidence supports an important role for epigenetic regulation as a new mechanism for controlling aspects of neural crest development. In this thesis, I dissect the roles of the de novo DNA methyltransferases (DNMTs) 3A and 3B in neural crest specification, migration and differentiation. First, I show that DNMT3A limits the spatial boundary between neural crest versus neural tube progenitors within the neuroepithelium. DNMT3A promotes neural crest specification by directly mediating repression of neural genes, like Sox2 and Sox3. Its knockdown causes ectopic Sox2 and Sox3 expression at the expense of neural crest territory. Thus, DNMT3A functions as a molecular switch, repressing neural to favor neural crest cell fate. Second, I find that DNMT3B restricts the temporal window during which the neural crest cells emigrate from the dorsal neural tube. Knockdown of DNMT3B causes an excess of neural crest emigration, by extending the time that the neural tube is competent to generate emigrating neural crest cells. In older embryos, this resulted in premature neuronal differentiation. Thus, DNMT3B regulates the duration of neural crest production by the neural tube and the timing of their differentiation. 
My results in avian embryos suggest that de novo DNA methylation, exerted by both DNMT3A and DNMT3B, plays a dual role in neural crest development, with each individual paralogue apparently functioning during a distinct temporal window. The results suggest that de novo DNA methylation is a critical epigenetic mark used for cell fate restriction of progenitor cells during neural crest cell fate specification. Our discovery provides important insights into the mechanisms that determine whether a cell becomes part of the central nervous system or peripheral cell lineages.


Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats that are equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded, international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine the seasonal to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep-waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes. 
Such a system would enable a new generation of global ocean prediction systems in support of carbon cycling, acidification, hypoxia and harmful algal bloom studies, as well as the management of living marine resources. In order to prepare for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Sub-polar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas covered seasonally with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects has motivated a planning meeting to discuss the requirements for and applications of a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees from eight nations now deploying Argo floats with biogeochemical sensors present to discuss this topic. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array. Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the needed resolution to greatly improve our understanding of biogeochemical processes and to enable significant improvement in ecosystem models.
With an endurance of four years for a Biogeochemical-Argo float, this system would require the procurement and deployment of 250 new floats per year to maintain a 1000 float array. The lifetime cost for a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000. A global Biogeochemical-Argo system would thus cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, and ~$6,250,000 each for the EU and for Austral/Asia and Canada. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). The full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade. The successful, ongoing pilot projects have provided the foundation and start for such a system.
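The budget arithmetic above follows directly from the quoted figures:

```python
# Cost arithmetic for the proposed 1000-float array, using the figures quoted above.
array_size = 1000            # floats maintained at steady state
endurance_years = 4          # working life of one float
lifetime_cost = 100_000      # US$ per float: capital, calibration, data management

floats_per_year = array_size / endurance_years   # replacements needed annually (250)
annual_cost = floats_per_year * lifetime_cost    # total annual programme cost (US$25M)
us_share = annual_cost / 2                       # US provides half, as in core Argo
partner_share = annual_cost / 4                  # each of the two remaining partner groups
```

The shares sum back to the US$25M annual total, matching the split quoted in the text.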

Relevância:

10.00%

Publicador:

Resumo:

Despite the wide range of applications in which multiphase fluid contact lines arise, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls, an approach inherently limited by the accuracy of the chosen theory. In fact, when inertial effects are important, the contact angle may be history dependent, so no single mathematical function is appropriate. Given these limitations, the present work has two primary goals: 1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics, and 2) develop equations and numerical methods so that contact-line simulations may be performed on coarse computational meshes.

Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it makes accurate calculation of interface curvature straightforward. Unfortunately, level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques for handling this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation-equation reinitialization is proposed to remove these spurious velocity currents, and the concept is further explored with level-set extension velocities.
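The curvature property that motivates the level set choice is that the interface curvature follows directly from derivatives of the level set field, kappa = div(grad(phi)/|grad(phi)|). A minimal NumPy sketch (illustrative only, not the thesis code; the grid and circle are assumptions) verifying this on a circle, whose exact 2-D curvature is 1/R:

```python
import numpy as np

# Curvature from a level set field: kappa = div( grad(phi) / |grad(phi)| ).
# For a circle of radius R the exact two-dimensional curvature is 1/R.

n = 256
x = np.linspace(-2.0, 2.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

R = 1.0
phi = np.sqrt(X**2 + Y**2) - R          # signed distance to the circle

gx, gy = np.gradient(phi, dx)           # grad(phi) by central differences
norm = np.sqrt(gx**2 + gy**2) + 1e-12   # guard against division by zero
nxx = np.gradient(gx / norm, dx, axis=0)
nyy = np.gradient(gy / norm, dx, axis=1)
kappa = nxx + nyy                       # divergence of the unit normal

# Sample curvature in a thin band around the interface (|phi| < dx):
near = np.abs(phi) < dx
print(kappa[near].mean())               # close to 1/R = 1.0
```

Away from the contact-line pathologies discussed above, this finite-difference estimate converges to the exact curvature as the mesh is refined, which is the property the capillary stress calculation relies on.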

To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). The DNS are found to converge only if the slip length is well resolved by the computational mesh. Since the slip length is often very small compared to the fluid structures of interest, such simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed that relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear (VS), are proposed to represent the missing microscale physics on a coarse mesh.
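The Navier-slip condition replaces the no-slip wall velocity with u_wall = lambda_s * du/dn, where lambda_s is the slip length that the DNS mesh must resolve. A minimal 1-D sketch (hypothetical parameters, not from the text) for shear-driven Couette flow between a fixed and a moving plate, where the exact solution with slip at both walls is u(y) = U*(y + lambda_s)/(H + 2*lambda_s):

```python
import numpy as np

# Navier-slip boundary condition, u_wall = lam * du/dn, in steady 1-D
# Couette flow: bottom wall fixed, top plate moving at speed U.
H, U, lam = 1.0, 1.0, 0.05        # gap, plate speed, slip length (assumed)
n = 101
y = np.linspace(0.0, H, n)
dy = y[1] - y[0]

# Steady momentum balance u'' = 0, solved by Jacobi iteration with
# Robin (slip) conditions discretized by one-sided differences:
#   u(0)     = lam * u'(0)        (slip at the fixed wall)
#   U - u(H) = lam * u'(H)        (slip relative to the moving wall)
u = np.zeros(n)
for _ in range(40000):
    u[1:-1] = 0.5 * (u[2:] + u[:-2])
    u[0] = lam * u[1] / (dy + lam)
    u[-1] = (U * dy + lam * u[-2]) / (dy + lam)

print(u[0])   # slip velocity at the fixed wall, U*lam/(H + 2*lam)
```

The point of the DNS convergence remark above is visible here: the one-sided derivative in the boundary update is only meaningful when dy is small relative to lam, i.e. when the mesh resolves the slip length.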

All of these components are then combined into a single framework and tested for a water droplet impacting a partially wetting substrate. Very good agreement is found between the experimental measurements and the numerical simulation for the time evolution of the contact diameter. Such a comparison would not be possible with prior methods, since the Reynolds number Re and capillary number Ca are large; furthermore, the experimentally approximated slip-length ratio is well outside the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at a reasonable computational expense.

Relevância:

10.00%

Publicador:

Resumo:

The purpose of this work is to study the military response of the First Portuguese Republic when faced with the proclamation of the Monarchy of the North. To meet this goal, the problem is divided into three parts: the organization of the republican forces, their evolution over the course of the conflict, and how the military operations were conducted. The study uses a methodology based on the inductive method, drawing on archival sources and the literature on the subject, with a qualitative research strategy and historical research as the research design.
We conclude that the Republic reacted militarily against the Monarchy of the North by creating operational forces from the units already present in the areas involved in the conflict and strengthening them with personnel mobilized from among discharged servicemen and civilian volunteers to continue the operations, which took place in three phases: initial containment of the advance of the monarchist forces towards the south, followed by their repulsion to the north of the Douro river, and finally their complete expulsion from the national territory.