949 results for To-failure Method


Relevance: 90.00%

Abstract:

We study the dynamical states of a small-world network of recurrently coupled excitable neurons, through both numerical and analytical methods. The dynamics of this system depend mostly on both the number of long-range connections or "shortcuts", and the delay associated with neuronal interactions. We find that persistent activity emerges at low density of shortcuts, and that the system undergoes a transition to failure as their density reaches a critical value. The state of persistent activity below this transition consists of multiple stable periodic attractors, whose number increases at least as fast as the number of neurons in the network. At large shortcut density and for long enough delays the network dynamics exhibit exceedingly long chaotic transients, whose failure times follow a stretched exponential distribution. We show that this functional form arises for the ensemble-averaged activity if the failure time for each individual network realization is exponentially distributed.
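The closing claim of this abstract, that a stretched-exponential tail of the ensemble-averaged activity arises when each realization's failure time is exponentially distributed with a realization-dependent rate, can be checked numerically. In the sketch below the lognormal spread of failure rates is an illustrative assumption, not the paper's measured distribution.

```python
import numpy as np

# If each network realization fails at an exponentially distributed time
# with its own rate r_i, the ensemble-averaged survival is the mean of
# exp(-r_i * t), which decays more slowly than any single exponential
# (a stretched-exponential-like tail).
rng = np.random.default_rng(0)
rates = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # assumed rate spread

t = np.linspace(0.0, 20.0, 201)
# Survival of realization i at time t is exp(-r_i * t); average over i.
ensemble_survival = np.exp(-np.outer(t, rates)).mean(axis=1)
# Pure exponential with the same mean rate, for comparison.
single_exponential = np.exp(-t * rates.mean())

assert np.isclose(ensemble_survival[0], 1.0)
assert ensemble_survival[-1] > single_exponential[-1]  # fatter tail
```

The comparison at the final time step confirms the qualitative mechanism: averaging exponentials over a distribution of rates produces a slower-than-exponential decay.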

Relevance: 90.00%

Abstract:

This paper presents seventy new experimental results from PMMA notched specimens tested under torsion at −60 °C. The notch root radius ranges from 0.025 to 7.0 mm. At this temperature the non-linear effects previously observed on specimens of the same material tested at room temperature are strongly reduced. The averaged value of the strain energy density over a control volume is used to assess the critical loads to failure. The radius of the control volume and the critical strain energy density are evaluated a priori by combining the mode III critical stress intensity factor from cracked specimens with the critical stress to failure obtained from semicircular notches with a large notch root radius.
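The assessment procedure described here reduces to comparing a strain energy density averaged over a control volume against a critical value. The sketch below illustrates that logic; the material numbers are invented placeholders, not the paper's PMMA data, and the critical-SED expression uses the standard linear-elastic energy density of pure shear.

```python
# Sketch of the averaged strain-energy-density (SED) failure check described
# above.  All numbers are illustrative placeholders, not the paper's values.

def critical_sed_torsion(tau_c_pa: float, shear_modulus_pa: float) -> float:
    """Critical SED for a shear-dominated (mode III) state, using the
    linear-elastic energy density of pure shear: W_c = tau_c^2 / (2 G)."""
    return tau_c_pa ** 2 / (2.0 * shear_modulus_pa)

def averaged_sed(strain_energy_j: float, control_volume_m3: float) -> float:
    """Strain energy density averaged over the control volume (J/m^3)."""
    return strain_energy_j / control_volume_m3

def predicts_failure(w_avg: float, w_critical: float) -> bool:
    """Failure is predicted once the averaged SED reaches the critical value."""
    return w_avg >= w_critical

# Placeholder material data of the right order of magnitude for a polymer.
w_c = critical_sed_torsion(tau_c_pa=50e6, shear_modulus_pa=1.5e9)
w_avg = averaged_sed(strain_energy_j=1.0e-3, control_volume_m3=1.0e-9)
print(predicts_failure(w_avg, w_c))
```

The appeal of the method, as the abstract notes, is that both the control-volume radius and the critical value are fixed a priori from independent tests, so the check itself involves no fitting.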

Relevance: 90.00%

Abstract:

When we set out to develop a framework for creating and correctly running a business, there are always people who treat its foundations as a challenge and look for a loophole. Dealing with these situations is not a matter of law but of devoting time to identifying them; it is often said that evil is one step ahead. Business ethics has long been distorted by certain would-be entrepreneurs, who have learned to play with ethical rules in order to present their businesses as prosperous, highlighting and adulterating their results quickly. Once a company reaches an international dimension, it takes on global responsibility, and it is in these cases that one can see whether the objective has been rapid capital gain or growth in line with the company's proportions. Business ethics rests on establishing a strong base so that interest is encouraged from an early stage: good staff and a sound organizational level must be achieved, not only inside the company but outside it as well. In this way a secure base can be created to convince potential investors and employees of the business. There are no shortcuts in business ethics: every fast track is either a stroke of genius or leads to failure. We must find where these leaps are occurring, that is, the errors in and corrections to business ethics and its rules. In this way we can distinguish a company or an entrepreneur who is working correctly from one merely keeping up appearances. Starting from the basics of business ethics and studying its different levels, from the personal level to the image the company projects to the world, we examine where these changes are occurring, how we can fight against them, and how the market can anticipate possible cases of fraud or suspicious movements designed to attract the unwary.

Relevance: 90.00%

Abstract:

A coupled elastoplastic-damage constitutive model with a Lode angle dependent failure criterion for high strain and ballistic applications is presented. A Lode angle dependent function is added to the equivalent plastic strain to failure definition of the Johnson–Cook failure criterion. The weakening in the elastic law and in the Johnson–Cook-like constitutive relation implicitly introduces the Lode angle dependency in the elastoplastic behaviour. The material model is calibrated for precipitation-hardened Inconel 718 nickel-base superalloy. The combination of a Lode angle dependent failure criterion with weakened constitutive equations is proven to predict the fracture patterns of the mechanical tests performed and to provide reliable results. Additionally, the dependency of the predicted fracture patterns on mesh size was studied, showing that it was crucial for predicting such patterns.
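The modification described, a Lode angle dependent term attached to the Johnson–Cook equivalent plastic strain to failure, can be sketched as follows. The constants D1..D5, the reference values, and the multiplicative form of the Lode weighting are illustrative assumptions, not the paper's Inconel 718 calibration.

```python
import math

def jc_failure_strain(triaxiality, strain_rate_ratio, homologous_temp,
                      D1=0.1, D2=0.8, D3=-1.5, D4=0.01, D5=1.0):
    """Classical Johnson-Cook equivalent plastic strain to failure."""
    return ((D1 + D2 * math.exp(D3 * triaxiality))
            * (1.0 + D4 * math.log(max(strain_rate_ratio, 1e-12)))
            * (1.0 + D5 * homologous_temp))

def lode_weight(xi, gamma=0.7):
    """Assumed Lode weighting: 1 for axisymmetric states (xi = +/-1),
    reduced to gamma for generalized shear (xi = 0)."""
    xi = max(-1.0, min(1.0, xi))
    return gamma + (1.0 - gamma) * abs(xi)

def failure_strain_with_lode(triaxiality, strain_rate_ratio,
                             homologous_temp, xi):
    return (jc_failure_strain(triaxiality, strain_rate_ratio, homologous_temp)
            * lode_weight(xi))

# At equal triaxiality, shear-dominated states (xi = 0) fail at a lower
# equivalent plastic strain than axisymmetric tension (xi = 1).
shear = failure_strain_with_lode(0.0, 1.0, 0.0, xi=0.0)
axisym = failure_strain_with_lode(0.0, 1.0, 0.0, xi=1.0)
assert shear < axisym
```

The point of the extra factor is exactly the inequality asserted at the end: the original Johnson–Cook form depends only on triaxiality, rate and temperature, so shear-dominated and axisymmetric states at the same triaxiality would otherwise fail at the same strain.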

Relevance: 90.00%

Abstract:

This article investigates experimentally the application of health monitoring techniques to assess the damage on a particular kind of hysteretic (metallic) damper called the web plastifying damper, which is subjected to cyclic loading. In general terms, hysteretic dampers are increasingly used as passive control systems in advanced earthquake-resistant structures. Nonparametric statistical processing of the signals obtained from simple vibration tests of the web plastifying damper is used here to propose an area index of damage. This area index of damage is compared with an alternative energy-based index of damage proposed in past research, which is based on the decomposition of the load–displacement curve experienced by the damper. The index of damage has been proven to accurately predict the level of damage and the proximity to failure of the web plastifying damper, but obtaining the load–displacement curve for its direct calculation requires costly instrumentation. For this reason, the aim of this study is to estimate the index of damage indirectly from simple vibration tests, calling for much simpler and cheaper instrumentation, through an auxiliary index called the area index of damage. The web plastifying damper is a particular type of hysteretic damper that uses the out-of-plane plastic deformation of the web of I-section steel segments as a source of energy dissipation. Four I-section steel segments with similar geometry were subjected to the same pattern of cyclic loading, and the damage was evaluated with the index of damage and the area index of damage at several stages of the loading process. A good correlation was found between the two indexes. Based on this correlation, simple formulae are proposed to estimate the index of damage from the area index of damage.

Relevance: 90.00%

Abstract:

The rise of the "Internet of Things" (IoT) and its associated technologies has enabled their application in many different domains, including the monitoring of forest ecosystems, disaster and emergency management, home automation, industrial automation, smart-city services, building energy efficiency, intrusion detection and the monitoring of body signals, among many others. The drawback of an IoT network is that, once deployed, it is left unattended; that is, it is exposed, among other things, to changing weather conditions, natural disasters, software or hardware failures, and malicious third-party attacks, so such networks can be considered failure-prone. The main requirement on the nodes of an IoT network is that they must be able to keep operating despite errors in the system itself. The ability of the network to recover from unexpected internal and external failures is what is currently known as network "resilience". Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant and to be self-configuring, self-adaptive and self-optimizing with respect to new conditions that may arise during execution. This leads to the analysis of a fundamental problem in the study of IoT networks, the "connectivity" problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network can become disconnected for several reasons, such as battery depletion, the destruction of a node, etc.
It is therefore necessary to manage the resilience of the network in order to maintain connectivity between its nodes, so that each IoT node can provide continuous services to other nodes, to other networks, or to other services and applications. In this context, the main objective of this doctoral thesis is the study of the IoT connectivity problem, and more specifically the development of models for the analysis and management of resilience, put into practice through wireless sensor networks (WSNs), with the aim of improving the fault tolerance of the nodes that make up the network. This challenge is addressed from two distinct angles. On the one hand, unlike other kinds of conventional device networks, the nodes of an IoT network are prone to losing their connections, because they are deployed in isolated environments or in environments with extreme conditions. On the other hand, the nodes are usually resource-constrained in terms of processing, storage and battery, among others, which requires the design of their resilience management to be lightweight, distributed and energy-efficient. In this regard, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of topology control, to be resilient to failures in its nodes. To this end, techniques based on fuzzy logic and on proportional-integral-derivative (PID) control are used, with the aim of improving the connectivity of the network while preserving energy consumption as much as possible. Likewise, the control algorithm is required to be distributed because, in general, centralized approaches are not feasible for large-scale deployments.
This thesis involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models that describe the network, a proposal for a self-adaptive control system that responds to node failures, the optimization of the control system parameters, validation through an implementation that follows a software engineering approach, and finally evaluation in a real application. In view of these challenges, this work justifies, through mathematical analysis, the relationship between the "node degree" (defined as the number of nodes in the neighborhood of the node in question) and the connectivity of the network, and proves the effectiveness of several types of controllers that adjust the transmission power of the network nodes in response to failures, taking energy consumption into account as part of the control objectives. This work also carries out an evaluation and a comparison with other representative algorithms, showing that the approach developed here tolerates more random node failures and is more energy-efficient. Additionally, the use of bio-inspired algorithms has made it possible to optimize the control parameters of large dynamic networks. Regarding the implementation in a real system, the proposals of this thesis have been integrated into an OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, especially the runtime reconfiguration of software components when a failure has occurred. In conclusion, the results of this doctoral thesis contribute to theoretical research on, and the practical application of, resilient topology control in large distributed networks.
The designs and algorithms presented here can be seen as novel trials of some of these techniques for the coming era of the IoT. The main contributions of this thesis are summarized as follows: (1) Properties related to network connectivity have been analyzed mathematically. It is studied, for example, how the probability of network connection varies as the communication range of the nodes is modified, and what the minimum number of nodes is that must be added to a disconnected system to reconnect it. (2) Control systems based on fuzzy logic have been proposed to reach the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated through simulations, and the results have been compared with other representative algorithms. (3) The double-loop control system has been investigated in greater depth, giving a simpler and more applicable approach, and its control parameters have been optimized using heuristic algorithms such as the Cross Entropy method (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs presented here have been evaluated by means of simulation; in addition, part of the work has been implemented and validated in a real application by combining self-adaptive software techniques, such as those of a Service-Oriented Architecture (SOA). ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart city, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunction issues and malicious attacks.
So, it can be considered that the IoT is prone to failure. The ability of the network to recover from unexpected internal and external failures is known as "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing IoT, which can further be broken down into "self-*" properties, such as self-adaptive, self-healing, self-configuring, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are not capable of providing continuous services for other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for analysis and management of resilience, implemented through the Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in a hostile or isolated environment. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires that the design of the resilience management for IoT has to be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. The fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, meanwhile taking into consideration that energy consumption must be preserved as much as possible. 
The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: creation and analysis of mathematical models describing the network, proposing self-adaptive control systems in response to node failures, control system parameter optimization, implementation following a software engineering approach, and evaluation in a real application. This thesis also justifies the relation between the "node degree" (the number of neighbors of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the transmission power of the IoT nodes in response to node failures. The controllers also take into consideration the energy consumption as part of the control goals. An evaluation is performed and a comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy when compared with those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows optimizing the parameters of the controller. With respect to the implementation in a real system, the programming model called OSGi (Open Service Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware that, in particular, reconfigures software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research and practical applications of resilient topology control for large and distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of the control and optimization techniques for the coming era of the IoT.
The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerant probability of a large-scale stochastic network is analyzed. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what is the minimum number of neighbors to be added for network re-connection. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains the network connectivity when it is subject to node failures. There are different types of fuzzy-logic controllers evaluated by simulations, and the results demonstrate the improvement of fault-tolerant capability as compared to some other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system is further investigated, and its control parameters are optimized by using some heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining the self-adaptive software technique and the control algorithms which are presented in this thesis.
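The degree-regulation idea at the heart of this thesis can be sketched as a local control loop: each node runs a PID controller that adjusts its transmission power until its observed node degree (number of neighbors) matches a desired degree, restoring connectivity after neighbor failures. The gains, power bounds and the linear power-to-degree plant below are illustrative assumptions, not the thesis's calibrated design.

```python
class DegreePID:
    def __init__(self, target_degree, kp=0.5, ki=0.1, kd=0.05,
                 p_min=1.0, p_max=10.0):
        self.target = target_degree
        self.kp, self.ki, self.kd = kp, ki, kd
        self.p_min, self.p_max = p_min, p_max
        self.integral = 0.0
        self.prev_error = 0.0
        self.power = p_min  # start at minimum power to conserve energy

    def update(self, observed_deg):
        error = self.target - observed_deg
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.power += (self.kp * error + self.ki * self.integral
                       + self.kd * derivative)
        # Clamp: energy efficiency argues for the lowest power that works.
        self.power = max(self.p_min, min(self.p_max, self.power))
        return self.power

def observed_degree(power, node_density=1.2):
    """Toy plant: neighbor count grows with transmission power."""
    return int(node_density * power)

ctrl = DegreePID(target_degree=6)
deg = observed_degree(ctrl.power)
for _ in range(50):
    deg = observed_degree(ctrl.update(deg))

assert deg == 6  # the local loop settles at the desired degree
```

Because each node only needs its own neighbor count and its own power setting, a loop like this is naturally distributed, which is the feasibility argument the thesis makes against centralized topology control.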

Relevance: 90.00%

Abstract:

This paper presents the experimental results obtained by applying frequency-domain structural health monitoring techniques to assess the damage suffered by a special type of damper called the Web Plastifying Damper (WPD). The WPD is a hysteretic-type energy dissipator recently developed for the passive control of structures subjected to earthquakes. It consists of several I-section steel segments connected in parallel. The energy is dissipated through plastic deformations of the web of the I-sections, which constitute the dissipative parts of the damper. WPDs were subjected to successive histories of dynamically-imposed cyclic deformations of increasing magnitude with the shaking table of the University of Granada. To assess the damage to the web of the I-section steel segments after each history of loading, a new damage index called the Area Index of Damage (AID) was obtained from simple vibration tests. The vibration signals were acquired by means of piezoelectric sensors attached to the I-sections, and non-parametric statistical methods were applied to calculate the AID in terms of changes in frequency response functions. The damage index AID was correlated with another energy-based damage index, ID, which past research has proven to accurately characterize the level of mechanical damage. The ID is rooted in the decomposition of the load-displacement curve experienced by the damper into the so-called skeleton and Bauschinger parts. The ID predicts the level of damage and the proximity to failure of the damper accurately, but it requires costly instrumentation. The experiments reported in this paper demonstrate a good correlation between the AID and the ID in a realistic seismic loading scenario consisting of dynamically applied arbitrary cyclic loads. Based on this correlation, it is possible to estimate the ID indirectly from the AID, which calls for much simpler and less expensive instrumentation.
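The AID computation described, a nonparametric comparison of frequency response functions (FRFs) before and after damage, can be sketched as follows. The exact definition used in the study may differ; here the index is the relative change of the area under the FRF magnitude, and the FRFs are synthetic.

```python
import numpy as np

def area_index_of_damage(frf_ref: np.ndarray, frf_now: np.ndarray,
                         df_hz: float) -> float:
    """Relative change of the area under the FRF magnitude (assumed AID)."""
    area_ref = np.sum(np.abs(frf_ref)) * df_hz
    area_now = np.sum(np.abs(frf_now)) * df_hz
    return abs(area_now - area_ref) / area_ref

freqs = np.linspace(1.0, 100.0, 500)
df = freqs[1] - freqs[0]

def toy_frf(f0_hz: float, amp: float) -> np.ndarray:
    """Single-resonance FRF magnitude with light damping."""
    r = freqs / f0_hz
    return amp / np.sqrt((1.0 - r ** 2) ** 2 + (0.05 * r) ** 2)

pristine = toy_frf(40.0, 1.0)
damaged = toy_frf(36.0, 0.7)  # assumed stiffness loss: lower, shifted peak

aid = area_index_of_damage(pristine, damaged, df)
assert aid > 0.0
```

In the study, an index of this kind is then mapped to the energy-based ID through an empirically fitted correlation, so that only cheap vibration measurements are needed in service.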

Relevance: 90.00%

Abstract:

The yeast gene KEM1 (also named SEP1/DST2/XRN1/RAR5) produces a G4-DNA-dependent nuclease that binds to G4 tetraplex DNA structure and cuts in a single-stranded region 5' to the G4 structure. G4-DNA generated from yeast telomeric oligonucleotides competitively inhibits the cleavage reaction, suggesting that this enzyme may interact with yeast telomeres in vivo. Homozygous deletions of the KEM1 gene in yeast block meiosis at the pachytene stage, which is consistent with the hypothesis that G4 tetraplex DNA may be involved in homologous chromosome pairing during meiosis. We conjectured that the mitotic defects of kem1/sep1 mutant cells, such as a higher chromosome loss rate, are also due to failure in processing G4-DNA, especially at telomeres. Here we report two phenotypes associated with a kem1-null allele, cellular senescence and telomere shortening, that provide genetic evidence that G4 tetraplex DNA may play a role in telomere functioning. In addition, our results reveal that chromosome ends in the same cells behave differently in a fashion dependent on the KEM1 gene product.

Relevance: 90.00%

Abstract:

Previously, researchers have speculated that genetic engineering can improve the long-term function of vascular grafts which are prone to atherosclerosis and occlusion. In this study, we demonstrated that an intraoperative gene therapy approach using antisense oligodeoxynucleotide blockage of medial smooth muscle cell proliferation can prevent the accelerated atherosclerosis that is responsible for autologous vein graft failure. Selective blockade of the expression of genes for two cell cycle regulatory proteins, proliferating cell nuclear antigen and cell division cycle 2 kinase, was achieved in the smooth muscle cells of rabbit jugular veins grafted into the carotid arteries. This alteration of gene expression successfully redirected vein graft biology away from neointimal hyperplasia and toward medial hypertrophy, yielding conduits that more closely resembled normal arteries. More importantly, these genetically engineered grafts proved resistant to diet-induced atherosclerosis. These findings establish the feasibility of developing genetically engineered bioprostheses that are resistant to failure and better suited to the long-term treatment of occlusive vascular disease.

Relevance: 90.00%

Abstract:

The main objective of this paper is twofold: on the one hand, to analyse the impact that the announcement of the opening of a new hotel has on the performance of its chain by carrying out an event study, and on the other hand, to compare the results of two different approaches to this method: a parametric specification based on autoregressive conditional heteroskedasticity models to estimate the market model, and a nonparametric approach, which implies employing Theil's nonparametric regression technique, which in turn leads to the so-called complete nonparametric approach to event studies. The results that the empirical application arrives at are noteworthy: on average, the reaction to such news releases is highly positive, with both approaches reaching the same level of significance. However, a word of caution must be said when one is not only interested in detecting whether the market reacts, but also in obtaining an exhaustive calculation of the abnormal returns in order to further examine their determining factors.
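The parametric branch of the method rests on the classical market-model event study: estimate alpha and beta over an estimation window, then measure abnormal returns (actual minus predicted) over the event window. A minimal sketch with simulated returns follows; the paper's GARCH variance specification and Theil's nonparametric regression are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
n_est, n_event = 200, 5
market = rng.normal(0.0, 0.01, n_est + n_event)
alpha_true, beta_true = 0.0002, 1.1
stock = alpha_true + beta_true * market + rng.normal(0.0, 0.005, n_est + n_event)
stock[n_est] += 0.03  # assumed positive reaction on the announcement day

# Ordinary least squares on the estimation window (the market model).
X = np.column_stack([np.ones(n_est), market[:n_est]])
alpha_hat, beta_hat = np.linalg.lstsq(X, stock[:n_est], rcond=None)[0]

# Abnormal returns and their cumulative sum over the event window.
expected = alpha_hat + beta_hat * market[n_est:]
abnormal = stock[n_est:] - expected
car = abnormal.sum()  # cumulative abnormal return

assert abnormal[0] > 0.005  # the injected announcement effect is detected
```

The caveat in the abstract applies exactly here: detecting a reaction (the sign and significance of `abnormal`) is easier than measuring its magnitude precisely, since the abnormal returns inherit the estimation error of `alpha_hat` and `beta_hat`.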

Relevance: 90.00%

Abstract:

Background. Low back pain is an increasing global health problem, which is associated with intervertebral disc (IVD) damage and degeneration. Major changes occur in the nucleus pulposus (NP), with the degradation of the extracellular matrix (ECM).1 Further studies showed that growth factors from the transforming growth factor β (TGFβ) and bone morphogenetic protein (BMP) family may induce chondrogenic differentiation of mesenchymal stem cells (MSC).2 Focusing on non-viral gene therapies and their possible translation into the clinic, we investigated whether GDF6 (syn. BMP13 or CDMP2) can induce regeneration of the degraded NP. We hypothesized that IVD cells transfected with a plasmid over-expressing GDF6 also up-regulate other NP and chondrogenic cell markers and enhance ECM deposition. Methods. Bovine nucleus pulposus cells (bNPC) and annulus fibrosus cells (bAFC) were harvested from bovine coccygeal IVD. Primary cells were then electroporated with the GDF6 plasmid (Origene, vector RG211366), with parameters optimized using the Neon Transfection System (Life Technologies, Basel). After transfection, cells were cultured in 2D monolayer or 3D alginate beads for 7, 14 or 21 days. Transfection efficiency of pGDF6 was analyzed by immunohistochemistry and fluorescence microscopy. Cell phenotype was quantified by real-time RT-PCR. To test a non-viral gene therapy applied directly to 3D whole-organ culture, coccygeal bovine IVDs were harvested as previously described. Bovine IVDs were transfected by injection of the GDF6 plasmid into the center. Electroporation was performed with the ECM830 Square Wave Electroporation System (Harvard Apparatus, MA) using a 2-needle array electrode or tweezertrodes. 72 h after transfection, discs were fixed, cryosectioned and analyzed by immunofluorescence against GDF6. Results. RT-PCR and immunohistochemistry confirmed up-regulation of GFP and GDF6 in the primary bNPC/bAFC culture.
The GFP-tagged GDF6 protein, however, was not visible, possibly due to failure of dimer formation as a result of the fusion structure. Organ IVD culture transfection revealed GDF6-positive staining in the center of the disc using the 2-needle array electrode. Results from tweezertrodes did not show any GDF6-positive cells. Conclusion. Non-viral transfection is an appealing approach for gene therapy as it fulfills the translational safety aspects of transiency and lacks the toxic effects of viral transduction. We identified novel parameters to successfully transfect primary bovine IVD cells. For transfection of whole IVD explants, electroporation parameters need to be further optimized. Acknowledgements. This project was funded by the Lindenhof Foundation (Funds "Research & Teaching"), project no. 13-02-F. The imaging part of this study was performed with the facility of the Microscopy Imaging Center (MIC), University of Bern. References. 1. Roughley PJ (2004), Spine (Phila Pa 1976), 29:2691-2699. 2. Clarke LE, McConnell JC, Sherratt MJ, Derby B, Richardson SM, Hoyland JA (2014), Arthritis Research & Therapy, 16:R67.

Relevance: 90.00%

Abstract:

In October 2000, Australia was declared "poliomyelitis-free" by the World Health Organisation. This declaration followed six years of extensive surveillance of all cases of acute flaccid paralysis by the Poliomyelitis Expert Surveillance Committee (Centre for Disease Control, Commonwealth Department of Health and Aged Care, Canberra), chaired by the author. There have been seven attempts in the history of the world to eliminate a disease from the earth's surface. The dramatic failure of five of these, the success of global smallpox eradication, and the current successes and difficulties of the attempt to eliminate poliomyelitis worldwide, lead one to an analysis of the factors which led both to success and to failure. The global eradication of a specific disease is one of the most important endeavours which the international community can undertake. This audit reviews the details of such approaches, so that they might be used as tools for the future.

Relevance: 90.00%

Abstract:

Objective: To critically analyse the proposed new psychiatric condition, demoralization syndrome, and the implications drawn by its proponents for the clinical-ethical status of requests by terminally ill patients for assistance to die. Method: The diagnostic features of demoralization syndrome, a proposed new psychiatric disorder, recognizable particularly in palliative care settings, are summarized. The consequences of proposed therapeutic interventions are described, one of which is relief of the desperation which motivates some demoralized patients to consider ending their lives and to seek assistance in dying. The connections between the proposed condition and the desire to die are analysed in the context of the continuing tensions surrounding the ontological status and sociopolitical implications of psychiatric categories and the pervasive medicalization of modern life. Results: The analysis suggests that by medicalizing existential cognitions at the end of life, the proposed diagnostic category also normalizes a particular moral view concerning assistance in dying. Conclusions: While further research into the issues described in this provisional syndrome may benefit some patients, the categorization of demoralization as a medical diagnosis is a questionable extension of psychiatry's influence, which could serve particular social, political and cultural views concerning the end of life.

Relevance: 90.00%

Abstract:

Background: In mental health, policy-makers and planners are increasingly being asked to set priorities. This means that health economists, health services researchers and clinical investigators are being called upon to work together to define and measure costs. Typically, these researchers take available service utilisation data and convert them to costs, using a range of assumptions. There are inefficiencies, as individual groups of researchers frequently repeat essentially similar exercises in achieving this end. There are clearly areas where shared or common investment in the development of statistical software syntax, analytical frameworks and other resources could maximise the use of data. Aims of the Study: This paper reports on an Australian project in which we calculated unit costs for mental health admissions and community encounters. In reporting on these calculations, our purpose is to make the data and the resources associated with them publicly available to researchers interested in conducting economic analyses, and to allow them to copy, distribute and modify them, provided that all copies and modifications are available under the same terms and conditions (i.e., in accordance with the 'Copyleft' principle). Within this context, the objectives of the paper are to: (i) introduce the 'Copyleft' principle; (ii) provide an overview of the methodology we employed to derive the unit costs; (iii) present the unit costs themselves; and (iv) examine the total and mean costs for a range of single and comorbid conditions, as an example of the kind of question that the unit cost data can be used to address. Method: We took relevant data from the Australian National Survey of Mental Health and Wellbeing (NSMHWB), and developed a set of unit costs for inpatient and community encounters. We then examined total and mean costs for a range of single and comorbid conditions.
Results: We present the unit costs for mental health admissions and mental health community contacts. Our example, which explored the association between comorbidity and total and mean costs, suggested that comorbidly occurring conditions cost more than conditions which occur on their own. Discussion: Our unit costs, and the materials associated with them, have been published in a freely available form governed by a provision termed 'Copyleft'. They provide a valuable resource for researchers wanting to explore economic questions in mental health. Implications for Health Policies: Our unit costs provide an important resource to inform economic debate in mental health in Australia, particularly in the area of priority-setting. In the past, such debate has largely been based on opinion. Our unit costs provide the underpinning to strengthen the evidence-base of this debate. Implications for Further Research: We would encourage other Australian researchers to make use of our unit costs in order to foster comparability across studies. We would also encourage Australian and international researchers to adopt the 'Copyleft' principle in equivalent circumstances. Furthermore, we suggest that the provision of 'Copyleft'-contingent funding to support the development of enabling resources for researchers should be considered in the planning of future large-scale collaborative survey work, both in Australia and overseas.
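The costing step the paper describes, converting survey service-use counts into totals via unit costs and then comparing single against comorbid conditions, can be sketched as follows. The unit costs, conditions and counts are invented placeholders, not the paper's NSMHWB-derived Australian figures.

```python
# Assumed unit costs per service type (placeholders, not the paper's values).
unit_costs = {"inpatient_day": 850.0, "community_contact": 120.0}

people = [
    {"conditions": {"anxiety"},               "inpatient_day": 0, "community_contact": 4},
    {"conditions": {"depression"},            "inpatient_day": 2, "community_contact": 3},
    {"conditions": {"anxiety", "depression"}, "inpatient_day": 5, "community_contact": 8},
]

def total_cost(person):
    """Total cost = sum over services of (service count * unit cost)."""
    return sum(unit_costs[s] * person[s] for s in unit_costs)

single = [total_cost(p) for p in people if len(p["conditions"]) == 1]
comorbid = [total_cost(p) for p in people if len(p["conditions"]) > 1]

mean_single = sum(single) / len(single)        # 1270.0 with these placeholders
mean_comorbid = sum(comorbid) / len(comorbid)  # 5210.0 with these placeholders
assert mean_comorbid > mean_single  # mirrors the paper's qualitative finding
```

Publishing the unit-cost table under Copyleft means exactly this kind of calculation can be reproduced and extended by other groups without re-deriving the costs.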

Relevance: 90.00%

Abstract:

Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains, how do the 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally-loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power law time-to-failure functions to the cumulative energy release are obtained.
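The accelerating-energy-release fits mentioned at the end can be illustrated with a small synthetic example. A common power-law time-to-failure form is E(t) = A - B(t_f - t)^m with 0 < m < 1, so release accelerates as t approaches the failure time t_f; the parameter values below are generic choices, not the paper's fitted values, and A and t_f are assumed known here although real analyses must estimate them too.

```python
import numpy as np

t_f, A, B, m_true = 100.0, 50.0, 5.0, 0.3
t = np.linspace(0.0, 99.0, 200)
energy = A - B * (t_f - t) ** m_true  # noise-free cumulative energy release

# log(A - E) = log(B) + m * log(t_f - t): linear in log-log coordinates,
# so (B, m) are recovered by a straight-line fit.
x = np.log(t_f - t)
y = np.log(A - energy)
m_hat, log_b_hat = np.polyfit(x, y, 1)

assert abs(m_hat - m_true) < 1e-9
assert abs(np.exp(log_b_hat) - B) < 1e-9
assert np.all(np.diff(energy) > 0)  # release grows monotonically toward t_f
```

With simulation output, the same fit would be applied to the noisy cumulative energy release, and the quality of the power-law fit is what supports the Critical Point interpretation.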