933 results for evolving
Abstract:
The interactive artwork Temporal arose from a series of art-science investigations with some of Australia’s leading flying fox ecologists. It was designed as a gently evolving meditation upon the complex, periodic processes that mark Australia’s often irregular seasonal changes. In turn, these changes directly govern the migratory movements of Australia’s keystone pollinating mammals, the megabats (flying foxes). Temporal further called attention to our increasing capacity to profoundly disturb these partners within Australia’s complex, life-supporting systems.
Abstract:
The emergence of new technologies has revolutionized the way companies interact and build relationships with customers. The channel–customer relationship has traditionally been managed via a push approach in communication (“What can we sell customers?”) with the hope of cultivating customer loyalty. However, emotional understandings of customers and how they feel about a product, service, or business can drastically alter consumers’ engagement, behavior, and purchasing preferences. This rapidly evolving landscape has left managers at a loss, and what they are experiencing is likely the beginning of a tectonic shift in the way digital channels are designed, monitored, and managed. In this article, digital channel relationships are examined, and useful concepts for clarifying and refining the emotional meaning behind a company’s strategy and its relationship to corresponding digital channels are detailed. Using three case study examples, we discuss the process and impact of such emotionally aware digital channel designs. Recommendations are made regarding how companies can select, design, and maintain digital engagements based on their strategy and industry needs.
Abstract:
Reductionist thinking will no longer suffice to address contemporary, complex challenges that defy sectoral, national, or disciplinary boundaries. Furthermore, lessons learned from the past cannot be confidently used to predict outcomes or help guide future actions. The authors propose that the confluence of a number of technology and social disruptors presents a pivotal moment in history to enable real-time, accelerated and integrated action that can adequately support a ‘future earth’ through transformational solutions. Building on more than a decade of dialogues hosted by the International Society for Digital Earth (ISDE), and evolving a briefing note presented to delegates of Pivotal2015, the paper presents an emergent context for collectively addressing spatial information, sustainable development and good governance through three guiding principles for enabling prosperous living in the 21st century. These are: (1) open data, (2) real-world context, and (3) informed visualization for decision support. The paper synthesizes an interdisciplinary dialogue to create a credible and positive future vision of collaborative and transparent action for the betterment of humanity and the planet. It is intended that the three Pivotal Principles can be used as an elegant framework for action towards the Digital Earth vision, across local, regional, and international communities and organizations.
Abstract:
Emerging embedded applications are based on evolving standards (e.g., MPEG2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a “future-proof” custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and supporting hardware comprising compute, storage, and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware is interconnected during runtime to form a pattern that closely matches the communication pattern of that particular application. The advantage is that the CEs bound together in this way are neither processor cores nor logic elements as in FPGAs. Hence, REDEFINE offers the power and performance advantage of an ASIC along with the hardware reconfigurability and programmability of an FPGA or an instruction-set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set extensible processors determine a sequence of instructions that repeatedly occurs within the application to create custom instructions at design time to speed up the execution of this sequence. We extend this scheme further, where a kernel is compiled into custom instructions that bear a strong producer-consumer relationship (and are not limited to frequently occurring sequences of instructions). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same custom instruction to be active in parallel. A key distinguishing factor in the majority of emerging embedded applications is stream processing. To reduce the overheads of data transfer between custom instructions, direct communication paths are employed among custom instructions. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between the custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by a factor of log(n) for an n-point FFT when compared to a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16x in power and 8x in area when compared to an ASIC. The REDEFINE implementation consumes 0.1x the power of an FPGA implementation. In addition, the configuration overhead of the FPGA implementation is 1,000x more than that of REDEFINE.
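To make the idea of grouping producer-consumer chains into runtime-composed custom instructions concrete, the following Python sketch greedily clusters a dataflow graph along producer-consumer edges. The graph format, the single-consumer merging heuristic, the cluster-size limit, and all names (form_custom_instructions, max_size, the example nodes) are illustrative assumptions, not the REDEFINE compiler's actual algorithm.

```python
from collections import defaultdict

def form_custom_instructions(edges, max_size=4):
    """Greedily cluster a dataflow graph along producer-consumer edges.

    edges: list of (producer, consumer) node-name pairs.
    Returns the set of clusters (frozensets of node names), each of which
    could be realized as one runtime-composed custom instruction.
    """
    consumers = defaultdict(set)
    nodes = set()
    for p, c in edges:
        consumers[p].add(c)
        nodes.update((p, c))

    cluster_of = {n: frozenset([n]) for n in nodes}
    # Merge a producer with its consumer when the producer feeds exactly one
    # consumer (a strong producer-consumer relationship) and the merged
    # cluster still fits within the assumed hardware budget.
    for p, c in edges:
        if len(consumers[p]) == 1:
            merged = cluster_of[p] | cluster_of[c]
            if len(merged) <= max_size:
                for n in merged:
                    cluster_of[n] = merged

    return set(cluster_of.values())

if __name__ == "__main__":
    # A small FFT-butterfly-like dataflow fragment (node names are made up).
    dfg = [("load_a", "mul1"), ("load_b", "mul1"),
           ("mul1", "add1"), ("twiddle", "add1"),
           ("add1", "store")]
    for cluster in form_custom_instructions(dfg):
        print(sorted(cluster))
```

In this toy run, the chain load_a/load_b → mul1 → add1 collapses into one cluster of four nodes, while nodes that would overflow the size budget remain separate; direct data transfer then only needs to be scheduled between clusters, which is the intuition behind the NoC-aware transport scheduling mentioned above.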
Abstract:
By “phenotypic plasticity” we refer to the capacity of a genotype to exhibit different phenotypes, whether in the same or in different environments. We have previously demonstrated that phenotypic plasticity can improve the degree of adaptation achieved via natural selection (Behera & Nanjundiah, 1995). That result was obtained from a genetic algorithm model of haploid genotypes (idealized as one-dimensional strings of genes) evolving in a fixed environment. Here, the dynamics of evolution is examined under conditions of a cyclically varying environment. We find that the rate of evolution, as well as the extent of adaptation (as measured by mean population fitness), is lowered because of environmental cycling. The decrease in adaptation caused by a varying environment can, however, be partly or wholly compensated by an increase in the degree of plasticity that a genotype is capable of. Also, the reduction of population fitness caused by a variable environment can be partially offset by decreasing the total number of genetic loci. We conjecture that an increase in genome size may have been among the factors responsible for the evolution of phenotypic plasticity.
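The kind of simulation described above can be pictured with the toy genetic algorithm sketched below. The bit-string genotype, the way plastic loci ('P') are scored, the truncation selection, and the square-wave cycling of the target environment are simplifying assumptions made for illustration; they are not the model of Behera & Nanjundiah.

```python
import random

GENOME_LEN = 32     # total number of genetic loci
PLASTIC_FRAC = 0.2  # probability that a mutated locus becomes plastic
POP_SIZE = 100
CYCLE = 20          # generations per environmental half-cycle

def fitness(genotype, environment):
    """Fraction of loci matching the environment; a plastic locus ('P')
    matches whatever the environment currently demands."""
    hits = sum(1 for g, e in zip(genotype, environment) if g == 'P' or g == e)
    return hits / len(genotype)

def mutate(genotype, rate=0.01):
    out = []
    for g in genotype:
        if random.random() < rate:
            out.append('P' if random.random() < PLASTIC_FRAC
                       else random.choice('01'))
        else:
            out.append(g)
    return ''.join(out)

def evolve(generations=200):
    env_a = ''.join(random.choice('01') for _ in range(GENOME_LEN))
    env_b = ''.join('1' if c == '0' else '0' for c in env_a)  # opposite environment
    pop = [''.join(random.choice('01') for _ in range(GENOME_LEN))
           for _ in range(POP_SIZE)]
    for gen in range(generations):
        env = env_a if (gen // CYCLE) % 2 == 0 else env_b  # cyclic environment
        ranked = sorted(pop, key=lambda g: fitness(g, env), reverse=True)
        parents = ranked[:POP_SIZE // 2]                    # truncation selection
        pop = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
        if gen % CYCLE == 0:
            mean_fit = sum(fitness(g, env) for g in pop) / POP_SIZE
            print(f"gen {gen:4d}  mean fitness {mean_fit:.3f}")

if __name__ == "__main__":
    evolve()
```

Running this with CYCLE set very large approximates a fixed environment, and comparing mean fitness across cycle lengths, plasticity fractions, and genome lengths reproduces the qualitative trade-offs the abstract describes.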
Abstract:
Malaria causes a worldwide annual mortality of about a million people. Rapidly evolving drug-resistant species of the parasite have created a pressing need for the identification of new drug targets and vaccine candidates. By developing fractionation protocols to enrich parasites from low-parasitemia patient samples, we have carried out the first-ever proteomics analysis of clinical isolates of early stages of Plasmodium falciparum (Pf) and P. vivax. Patient-derived malarial parasites were directly processed and analyzed using a shotgun proteomics approach with high-sensitivity MS for protein identification. Our study revealed about 100 parasite-coded gene products that included many known drug targets such as Pf hypoxanthine guanine phosphoribosyl transferase, Pf L-lactate dehydrogenase, and plasmepsins. In addition, our study reports the expression of several parasite proteins in clinical ring stages that have never been reported in the ring stages of the laboratory-cultivated parasite strain. This proof-of-principle study represents a noteworthy step forward in our understanding of pathways elaborated by the parasite within the malaria patient and will pave the way towards the identification of new drug and vaccine targets that can aid malaria therapy.
Abstract:
A new framework is proposed in this work to solve multidimensional population balance equations (PBEs) using the method of discretization. A continuous PBE is considered as a statement of the evolution of one evolving property of particles and the conservation of their n internal attributes. Discretization must therefore preserve n + 1 properties of particles. The continuously distributed population is represented on discrete fixed pivots as in the fixed pivot technique of Kumar and Ramkrishna [1996a. On the solution of population balance equations by discretization - I. A fixed pivot technique. Chemical Engineering Science 51(8), 1311-1332] for 1-d PBEs, but instead of the earlier extensions of this technique proposed in the literature, which preserve 2^n properties of non-pivot particles, the new framework requires n + 1 properties to be preserved. This opens up the use of triangular and tetrahedral elements to solve 2-d and 3-d PBEs, instead of the rectangles and cuboids that are suggested in the literature. The capabilities of computational fluid dynamics and other packages available for generating complex meshes can also be harnessed. The numerical results obtained indeed show the effectiveness of the new framework. It also brings out the hitherto unknown role of the directionality of the grid in controlling the accuracy of the numerical solution of multidimensional PBEs. The numerical results obtained show that the quality of the numerical solution can be improved significantly just by altering the directionality of the grid, which does not require any increase in the number of points, any refinement of the grid, or even a redistribution of pivots in space. The directionality of a grid can be altered simply by regrouping pivots.
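For orientation, the fixed pivot redistribution referred to above can be written as follows in 1-d and then on a triangular element in 2-d; the notation here is chosen for illustration and is not taken from the paper. A new particle of size v lying between pivots x_i and x_{i+1} is assigned to the two pivots with fractions that preserve number and mass, i.e. two (= n + 1 with n = 1) properties:

```latex
a(v, x_i) + b(v, x_{i+1}) = 1, \qquad
a(v, x_i)\, x_i + b(v, x_{i+1})\, x_{i+1} = v,
\]
\[
\Rightarrow\quad a = \frac{x_{i+1} - v}{x_{i+1} - x_i}, \qquad
b = \frac{v - x_i}{x_{i+1} - x_i}.
\]
% On a triangular element with vertex pivots x_1, x_2, x_3 and a new
% particle at v, the three fractions a_k satisfy
\[
\sum_{k=1}^{3} a_k = 1, \qquad \sum_{k=1}^{3} a_k\, \mathbf{x}_k = \mathbf{v},
```

so the fractions are simply the barycentric coordinates of the new particle within the triangle, preserving n + 1 = 3 properties (number plus two internal attributes) rather than the 2^n = 4 properties needed on a rectangular cell.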
Abstract:
The aim of the studies was to improve the diagnostic capability of electrocardiography (ECG) in detecting myocardial ischemic injury, with the future goal of an automatic screening and monitoring method for ischemic heart disease. The method of choice was body surface potential mapping (BSPM), containing numerous leads, with the intention of finding the optimal recording sites and optimal ECG variables for ischemia and myocardial infarction (MI) diagnostics. The studies included 144 patients with prior MI, 79 patients with evolving ischemia, 42 patients with left ventricular hypertrophy (LVH), and 84 healthy controls. Study I examined the depolarization wave in prior MI with respect to MI location. Studies II-V examined the depolarization and repolarization waves in prior MI detection with respect to the Minnesota code and Q-wave status, and study V also with respect to MI location. In study VI the depolarization and repolarization variables were examined in 79 patients in the face of evolving myocardial ischemia and ischemic injury. When analyzed from a single lead at any recording site, the results revealed the superiority of the repolarization variables over the depolarization variables and over the conventional 12-lead ECG methods, both in the detection of prior MI and of evolving ischemic injury. The QT integral, covering both depolarization and repolarization, appeared insensitive to the Q-wave status, the time elapsed since MI, and the MI or ischemia location. In the face of evolving ischemic injury, the performance of the QT integral was not hampered even by underlying LVH. The examined depolarization and repolarization variables were effective when recorded at a single site, in contrast to the conventional 12-lead ECG criteria. The inverse spatial correlation of the depolarization and repolarization waves in myocardial ischemia and injury could be reduced to the QT integral variable recorded at a single site on the left flank. In conclusion, the QT integral variable, detectable in a single lead with an optimal recording site on the left flank, was able to detect prior MI and evolving ischemic injury more effectively than the conventional ECG markers. The QT integral, in a single lead or a small number of leads, offers potential for automated screening of ischemic heart disease, acute ischemia monitoring, therapeutic decision guiding, and risk stratification.
Abstract:
Conventional invasive coronary angiography is the clinical gold standard for detecting coronary artery stenoses. Noninvasive multidetector computed tomography (MDCT) in combination with retrospective ECG gating has recently been shown to permit visualization of the coronary artery lumen and detection of coronary artery stenoses. Single photon emission computed tomography (SPECT) perfusion imaging has been considered the reference method for evaluation of nonviable myocardium, but magnetic resonance imaging (MRI) can accurately depict structure, function, effusion, and myocardial viability, with an overall capacity unmatched by any other single imaging modality. Magnetocardiography (MCG) noninvasively provides information about myocardial excitation propagation and repolarization without the use of electrodes. This evolving technique may be considered the magnetic equivalent of electrocardiography. The aim of the present series of studies was to evaluate changes in the myocardium caused by coronary artery disease as assessed with SPECT and MRI, to examine the capability of multidetector computed tomography coronary angiography (MDCT-CA) to detect significant stenoses in the coronary arteries, and to examine the capability of MCG to assess remote myocardial infarctions. Our study showed that in severe, progressing coronary artery disease, laser treatment does not improve global left ventricular function or myocardial perfusion, but it does preserve systolic wall thickening in fixed defects (scar). It also prevents changes from ischemic myocardial regions to scar. The MCG repolarization variables are informative in remote myocardial infarction and may perform as well as the conventional QRS criteria in the detection of healed myocardial infarction. These ST-T abnormalities are more pronounced in patients with Q-wave infarction than in patients with non-Q-wave infarctions. MDCT-CA had a sensitivity of 82%, a specificity of 94%, a positive predictive value of 79%, and a negative predictive value of 95% for stenoses over 50% in the main coronary arteries as compared with conventional coronary angiography in patients with known coronary artery disease. Left ventricular wall dysfunction, perfusion defects, and infarctions were detected in 50-78% of sectors assigned to calcifications or stenoses, but also in sectors supplied by normally perfused coronary arteries. Our study showed a low sensitivity (63%) for detecting obstructive coronary artery disease assessed by MDCT in patients with severe aortic stenosis. Massive calcifications complicated correct assessment of the lumen of the coronary arteries.
Abstract:
Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach operational scales of 1-4 km. This requires fewer approximations in the model equations, a more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate a similar LLJ flow structure as suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is particularly important, especially if the inner meso-scale model domain is small.
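The "set of hydrodynamic equations" mentioned above is, in schematic form, the hydrostatic primitive-equation system; a dry, frictionless textbook form is reproduced below for orientation only, with illustrative notation rather than the HIRLAM formulation:

```latex
\frac{du}{dt} - f v = -\frac{1}{\rho}\frac{\partial p}{\partial x}, \qquad
\frac{dv}{dt} + f u = -\frac{1}{\rho}\frac{\partial p}{\partial y}, \qquad
\frac{\partial p}{\partial z} = -\rho g,
\]
\[
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0, \qquad
c_p \frac{dT}{dt} - \frac{1}{\rho}\frac{dp}{dt} = Q, \qquad
p = \rho R T,
```

where d/dt is the material derivative, f the Coriolis parameter, and Q the diabatic heating per unit mass. The physical parameterizations validated in the thesis (convection, radiation, boundary-layer turbulence) enter precisely through terms such as Q, which cannot be resolved explicitly at 1-10 km grid sizes.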
Abstract:
Strong motion array records are analyzed in this paper to identify and map the source zone of four past earthquakes. The source is represented as a sequence of double couples evolving as ramp functions, triggering at different instants, distributed in a region yet to be mapped. The known surface-level ground motion time histories are treated as responses to the unknown double couples on the fault surface. The location, orientation, magnitude, and risetime of the double couples are found by minimizing the mean square error between the analytical solution and the instrumental data. Numerical results are presented for the Chi-Chi, Imperial Valley, San Fernando, and Uttarakashi earthquakes. The results obtained are in good agreement with field investigations and with those obtained from conventional finite fault source inversions.
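The inversion described above amounts to a nonlinear least-squares problem. One way to write the misfit that is minimized, under notation assumed here for illustration rather than taken from the paper, is:

```latex
s_k(t) = M_k\, R\!\left(\frac{t - t_k}{\tau_k}\right), \qquad
R(\xi) = \min\bigl(\max(\xi, 0),\, 1\bigr),
\]
\[
u^{\mathrm{syn}}_j(t) = \sum_{k=1}^{K} G_j\bigl(t;\, \mathbf{r}_k, \boldsymbol{\theta}_k\bigr) * \dot{s}_k(t), \qquad
E = \sum_{j=1}^{N} \int_0^{T} \bigl[\, u^{\mathrm{obs}}_j(t) - u^{\mathrm{syn}}_j(t) \,\bigr]^2 \, dt,
```

where each double couple k has location r_k, orientation θ_k, moment M_k, trigger time t_k, and risetime τ_k; R is a unit ramp, G_j is the corresponding Green's-function response at station j, and E (up to normalization, the mean square error of the abstract) is minimized over all source parameters.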
Abstract:
Chronic inflammation is now recognized as a major cause of malignant disease. In concert with various mechanisms (including DNA instability), hypoxia and activation of inflammatory bioactive lipid pathways and pro-inflammatory cytokines open the doorway to malignant transformation and proliferation, angiogenesis, and metastasis in many cancers. A balance between stimulatory and inhibitory signals regulates the immune response to cancer. These include inhibitory checkpoints that modulate the extent and duration of the immune response and may be activated by tumor cells. This contributes to immune resistance, especially against tumor antigen-specific T-cells. Targeting these checkpoints is an evolving approach to cancer immunotherapy, designed to foster an immune response. The current focus of these trials is on the programmed cell death protein 1 (PD-1) receptor and its ligands (PD-L1, PD-L2) and cytotoxic T-lymphocyte-associated protein 4 (CTLA-4). Researchers have developed anti-PD-1 and anti-PD-L1 antibodies that interfere with the ligands and receptor and allow the tumor cell to be recognized and attacked by tumor-infiltrating T-cells. These are currently being studied in lung cancer. Likewise, CTLA-4 inhibitors, which have had success treating advanced melanoma, are being studied in lung cancer with encouraging results.
Abstract:
The acceleration of the universe has been established but not explained. During the past few years, precise cosmological experiments have confirmed the standard big bang scenario of a flat universe undergoing an inflationary expansion in its earliest stages, during which the perturbations are generated that eventually form galaxies and other structure in matter, most of which is non-baryonic dark matter. Curiously, the universe has presently entered into another period of acceleration. Such a result is inferred from observations of extra-galactic supernovae and is independently supported by the cosmic microwave background radiation and large-scale structure data. It seems there is a positive cosmological constant speeding up the universal expansion of space. The vacuum energy density the constant describes should then be about a dozen times the present energy density in visible matter, but particle physics scales are enormously larger than that. This is the cosmological constant problem, perhaps the greatest mystery of contemporary cosmology. In this thesis we explore alternative agents of the acceleration. Generically, these are called dark energy. If some symmetry turns off vacuum energy, its value is not a problem, but one still needs some dark energy. Such could be a scalar field dynamically evolving in its potential, or some other exotic constituent exhibiting negative pressure. Another option is to assume that gravity at cosmological scales is not well described by general relativity. In a modified theory of gravity one might find the expansion rate increasing in a universe filled by just dark matter and baryons. Such possibilities are taken here under investigation. The main goal is to uncover observational consequences of different models of dark energy, the emphasis being on their implications for the formation of the large-scale structure of the universe. Possible properties of dark energy are investigated using phenomenological parameterizations, but several specific models are also considered in detail. Difficulties in unifying dark matter and dark energy into a single concept are pointed out. Considerable attention is given to modifications of gravity resulting in second-order field equations. It is shown that in a general class of such models the viable ones effectively reproduce the cosmological constant, while in another class one might find interesting modifications of the standard cosmological scenario that are still allowed by observations. The thesis consists of seven research papers preceded by an introductory discussion.
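As background to the "negative pressure" requirement mentioned above, the standard acceleration condition and the energy density and pressure of a dark-energy scalar field can be written as follows; these are textbook relations (flat FRW, units with c = 1), quoted here only for orientation:

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\bigl(\rho + 3p\bigr), \qquad
w \equiv \frac{p}{\rho} < -\frac{1}{3} \;\Rightarrow\; \ddot{a} > 0,
\]
\[
\rho_\phi = \tfrac{1}{2}\dot{\phi}^2 + V(\phi), \qquad
p_\phi = \tfrac{1}{2}\dot{\phi}^2 - V(\phi),
```

so a slowly rolling field with V(φ) ≫ φ̇² has w_φ ≈ -1 and mimics a cosmological constant, whereas the modified-gravity models studied in the thesis alter the gravitational side of the field equations instead of adding such a component.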
Abstract:
Social work in health care has been established for more than 100 years and is one of the largest areas of practice for social workers. Over time, demographic changes and growth in the aging population, increased longevity rates, an explosion in rates of chronic illness, and the rapidly increasing cost of health care have created serious challenges for acute hospitals and health social workers. This article reviews the Australian health care system and policies, with particular emphasis on the public hospital system. It then examines current hospital social work roles, including the continued role in discharge planning and expanding responsibility for emerging client problems, such as patient complexity, legal issues, and carer issues. The article concludes with a discussion of evolving issues and challenges facing health social work to ensure that social work remains relevant within this practice context.
Abstract:
The future of civic engagement is characterised by both technological innovation and new technological user practices that are fuelled by trends towards mobile, personal devices; broadband connectivity; open data; urban interfaces; and cloud computing. These technology trends are progressing at a rapid pace, and have led global technology vendors to package and sell the “Smart City” as a centralised service delivery platform predicted to optimise and enhance cities’ key performance indicators – and generate a profitable market. The top-down deployment of these large and proprietary technology platforms has helped sectors such as energy, transport, and healthcare to increase efficiencies. However, an increasing number of scholars and commentators warn of another “IT bubble” emerging. Along with some city leaders, they argue that the top-down approach does not fit the governance dynamics and values of a liberal democracy when applied across sectors. A thorough understanding is required of the socio-cultural nuances of how people work, live, and play across different environments, and how they employ social media and mobile devices to interact with, engage in, and constitute public realms. Although the term “slacktivism” is sometimes used to denote a watered-down version of civic engagement and activism that is reduced to clicking a “Like” button and signing online petitions, we believe that we are far from witnessing another Biedermeier period that saw people focus on the domestic and the non-political. There is plenty of evidence to the contrary, such as the post-election violence in Kenya in 2008, the Occupy movements in New York, Hong Kong and elsewhere, the Arab Spring, Stuttgart 21, Fukushima, the Taksim Gezi Park protests in Istanbul, and the Vinegar Movement in Brazil in 2013. These examples of civic action shape the dynamics of governments and, in turn, call for new processes to be incorporated into governance structures. Participatory research into these new processes across the triad of people, place and technology is a significant and timely investment to foster productive, sustainable, and liveable human habitats. With this article, we want to reframe the current debates in academia and priorities in industry and government to allow citizens and civic actors to take their rightful place at the centre of civic movements. This calls for new participatory approaches for co-inquiry and co-design. It is an evolving process with an explicit agenda to facilitate change, and we propose participatory action research (PAR) as an indispensable component in the journey to develop new governance infrastructures and practices for civic engagement. We do not limit our definition of civic technologies to tools specifically designed to simply enhance government and governance, such as renewing your car registration online or casting your vote electronically on election day. Rather, we are interested in civic media and technologies that foster citizen engagement in the widest sense, and particularly the participatory design of such civic technologies that strive to involve citizens in political debate and action as well as question conventional approaches to political issues. The rationale for this approach is to offer an alternative to smart cities in a “perpetual tomorrow,” based on the many weak and strong signals of civic action revolving around technology seen today. It seeks to emphasise and direct attention to active citizenry over passive consumerism, human actors over human factors, culture over infrastructure, and prosperity over efficiency. First, we look at some fundamental issues arising from applying simplistic smart city visions to the kind of problem a city poses. We focus on the touch points between “the city” and its civic body, the citizens. In order to provide for meaningful civic engagement, the city must provide appropriate interfaces.