48 results for single-case designs
Abstract:
This paper presents a clocking pipeline technique referred to as a single-pulse pipeline (PP-pipeline) and applies it to the problem of mapping pipelined circuits to a Field Programmable Gate Array (FPGA). A PP-pipeline replicates the operation of asynchronous micropipelined control mechanisms using the synchronous-oriented logic resources commonly found in FPGA devices. Consequently, circuits with an asynchronous-like pipeline operation can be synthesized efficiently using a synchronous design methodology. The technique can be extended with data-completion circuitry to take advantage of variable data-completion processing times in synchronous pipelined designs. It is also shown that the PP-pipeline reduces the clock-tree power consumption of pipelined circuits. These potential applications are demonstrated by post-synthesis simulation of FPGA circuits.
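The micropipeline-style control that the PP-pipeline emulates can be pictured as elastic token flow: a stage passes its data forward only when the stage ahead is empty, so a stall at the output ripples backwards stage by stage. The following is a minimal, hypothetical Python sketch of that handshake behaviour; the stage count, stall pattern and data values are illustrative and not taken from the paper.

```python
# Hypothetical sketch of micropipeline-style token flow (the behaviour
# the PP-pipeline emulates with synchronous FPGA resources).

class Stage:
    def __init__(self, name):
        self.name = name
        self.data = None                  # None = latch empty (ready to accept)

    def can_accept(self):
        return self.data is None

def step(stages, new_item=None, consume=True):
    """One handshake round: a token advances only when the next stage is
    empty; new work is refused while the first stage is still full."""
    out = None
    if consume and stages[-1].data is not None:
        out = stages[-1].data
        stages[-1].data = None            # environment consumed the result
    for i in range(len(stages) - 2, -1, -1):
        if stages[i].data is not None and stages[i + 1].can_accept():
            stages[i + 1].data = stages[i].data
            stages[i].data = None
    if new_item is not None and stages[0].can_accept():
        stages[0].data = new_item
    return out

pipeline = [Stage(f"s{i}") for i in range(3)]
for t in range(8):
    # Downstream accepts output only every other step, exposing the stall.
    done = step(pipeline, new_item=t, consume=(t % 2 == 0))
    print(t, [s.data for s in pipeline], "->", done)
```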
Abstract:
Using the 1:2 condensate of benzildihydrazone and 2-acetylpyridine as a tetradentate N-donor ligand L, the pale yellow complexes LaL(NO3)3 (1) and EuL(NO3)3 (2) are synthesized. While single crystals of 1 could not be obtained, 2 crystallises as a mono(dichloromethane) solvate, 2·CH2Cl2, in the space group Cc with a = 11.7099(5) Å, b = 16.4872(5) Å, c = 17.9224(6) Å and β = 104.048(4)°. From the X-ray crystal structure, 2 is found to be a rare example of a monohelical complex of Eu(III). Complex 1 is diamagnetic. The magnetic moment of 2 at room temperature is 3.32 BM. Comparing the FT-IR spectra of 1 and 2, it is concluded that 1 is also a mononuclear single helix. ¹H NMR reveals that both 1 and 2 are mixtures of two diastereomers. In the case of the La(III) complex (1) the diastereomeric excess is only 10%, but in the Eu(III) complex (2) it is 80%. The occurrence of diastereomerism is explained by the chiralities of the helical motif and the type of pentakis chelates present in 1 and 2.
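For reference, the diastereomeric excess quoted here follows the usual definition from the relative amounts of the two diastereomers (e.g. as integrated from the ¹H NMR spectra); on that reading, 10% and 80% correspond to roughly 55:45 and 90:10 mixtures:

```latex
% Standard definition of diastereomeric excess (de) from the mole
% fractions (or NMR integrals) of the major and minor diastereomers:
\[
  \mathrm{de} = \frac{x_{\text{major}} - x_{\text{minor}}}
                     {x_{\text{major}} + x_{\text{minor}}} \times 100\%
\]
% e.g. de = 10% -> 55:45 (complex 1); de = 80% -> 90:10 (complex 2).
```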
Abstract:
This paper presents a semi-synchronous pipeline scheme, here referred to as a single-pulse pipeline, for the problem of mapping pipelined circuits to a Field Programmable Gate Array (FPGA). Area and timing considerations are given for a general case and later applied to a systolic circuit as an illustration. The single-pulse pipeline can manage asynchronous worst-case data completion, and it is evaluated against two asynchronous pipelining schemes: a four-phase bundled-data pipeline and a doubly-latched asynchronous pipeline. The semi-synchronous pipeline proposal occupies less FPGA area and operates faster than the two selected fully asynchronous schemes for the FPGA case considered.
Abstract:
In situ precipitation measurements can differ greatly in space and time. Taking into account the limited spatio-temporal representativity and the uncertainty of a single station is important for validating mesoscale numerical model results as well as for interpreting remote sensing data. In situ precipitation data from a high-resolution network in North-Eastern Germany are analysed to determine their temporal and spatial representativity. For the dry year 2003, precipitation amounts were available with 10 min resolution from 14 rain gauges distributed over an area of 25 km × 25 km around the Meteorological Observatory Lindenberg (Richard-Aßmann Observatory). Our analysis reveals that short-term (up to 6 h) precipitation events dominate (94% of all events) and that the distribution is skewed, with a high frequency of very low precipitation amounts. Long-lasting precipitation events are rare (6% of all precipitation events) but account for nearly 50% of the annual precipitation. The spatial representativity of a single-site measurement increases slightly for longer measurement intervals, and the variability decreases. Hourly precipitation amounts are representative for an area of 11 km × 11 km. Daily precipitation amounts appear to be reliable with an uncertainty factor of 3.3 for an area of 25 km × 25 km, and weekly and monthly precipitation amounts have uncertainty factors of 2 and 1.4, respectively, when compared to 25 km × 25 km mean values.
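The point-versus-area comparison behind such an "uncertainty factor" can be sketched with synthetic data. The code below is an illustrative assumption, not the paper's method: it invents 10-min gauge records, aggregates them to daily totals, and reads the uncertainty factor as a high quantile of how far one gauge strays from the network areal mean.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: 14 gauges, one year of 10-min totals (mm).
# The paper uses the real Lindenberg network; these values only serve
# to show the aggregation and point-versus-area comparison steps.
n_gauges, n_steps = 14, 365 * 24 * 6
wet = rng.random((n_gauges, n_steps)) < 0.02           # ~2% of intervals wet
rain = wet * rng.gamma(shape=0.5, scale=0.6, size=(n_gauges, n_steps))

def aggregate(x, steps_per_window):
    """Sum 10-min totals into longer accumulation windows."""
    n = x.shape[1] // steps_per_window
    return x[:, :n * steps_per_window].reshape(
        x.shape[0], n, steps_per_window).sum(axis=2)

daily = aggregate(rain, 6 * 24)                        # daily totals per gauge
area_mean = daily.mean(axis=0)                         # 25 km x 25 km areal mean
gauge = daily[0]                                       # a single-site record
mask = (area_mean > 0.1) & (gauge > 0.1)               # ignore (near-)dry days
ratio = gauge[mask] / area_mean[mask]
# Assumed reading of the "uncertainty factor": a high quantile of how far
# the point value strays from the areal mean, in either direction.
factor = np.quantile(np.maximum(ratio, 1.0 / ratio), 0.95)
print(f"daily point-vs-area uncertainty factor ~ {factor:.2f}")
```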
Abstract:
Uncertainty affects all aspects of the property market, but one area where its impact is particularly significant is within feasibility analyses. Any development is affected by differences between market conditions at the conception of the project and the market realities at the time of completion. The feasibility study needs to address the possible outcomes based on an understanding of the current market. This requires the appraiser to forecast the most likely outcome for the sale price of the completed development, the construction costs and the timing of both. It also requires the appraiser to understand the impact of finance on the project. All these issues are time-sensitive, and analysis needs to be undertaken to show the impact of time on the viability of the project. The future is uncertain, and a full feasibility analysis should be able to model the upside and downside risk pertaining to a range of possible outcomes. Feasibility studies are extensively used in Italy to determine land value, but they tend to be single-point analyses based upon a single set of “likely” inputs. In this paper we look at the practical impact of uncertainty in the input variables using a simulation model (Crystal Ball©), with an actual case study of an urban redevelopment plan for an Italian municipality. This allows the appraiser to address the issues of uncertainty involved and thus provide the decision maker with a better understanding of the risk of development. The technique is then refined using a “two-dimensional” technique to distinguish between “uncertainty” and “variability” and thus create a more robust model.
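As a rough illustration of the kind of simulation described (not the authors' Crystal Ball model), the sketch below draws the uncertain inputs of a residual land-value appraisal from assumed distributions and reports the resulting spread; every figure and distribution choice here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                  # number of Monte Carlo trials

# Hypothetical input distributions; the study calibrates these to an
# Italian urban redevelopment case instead.
sale_price = rng.normal(12_000_000, 1_200_000, n)   # gross development value (EUR)
build_cost = rng.triangular(6_000_000, 6_500_000, 8_000_000, n)
months = rng.uniform(18, 30, n)                     # development period
annual_rate = 0.06                                  # assumed finance/discount rate

# Residual land value: discounted sale proceeds minus costs and a rough
# finance charge (costs carried for about half the development period).
finance = build_cost * ((1 + annual_rate) ** (months / 24) - 1)
residual = (sale_price / (1 + annual_rate) ** (months / 12)
            - build_cost - finance)

for q in (5, 50, 95):
    print(f"P{q:02d} residual land value: {np.percentile(residual, q):,.0f} EUR")
print(f"probability residual < 0: {(residual < 0).mean():.1%}")
```

Reporting percentiles and the probability of a negative residual, rather than one "likely" point value, is what gives the decision maker a view of the downside risk the abstract refers to.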
Abstract:
Developing high-quality scientific research will be most effective if research communities with diverse skills and interests are able to share information and knowledge, are aware of the major challenges across disciplines, and can exploit economies of scale to provide robust answers and better inform policy. We evaluate opportunities and challenges facing the development of a more interactive research environment by developing an interdisciplinary synthesis of research on a single geographic region. We focus on the Amazon as it is of enormous regional and global environmental importance and faces a highly uncertain future. To take stock of existing knowledge and provide a framework for analysis we present a set of mini-reviews from fourteen different areas of research, encompassing taxonomy, biodiversity, biogeography, vegetation dynamics, landscape ecology, earth-atmosphere interactions, ecosystem processes, fire, deforestation dynamics, hydrology, hunting, conservation planning, livelihoods, and payments for ecosystem services. Each review highlights the current state of knowledge and identifies research priorities, including major challenges and opportunities. We show that while substantial progress is being made across many areas of scientific research, our understanding of specific issues is often dependent on knowledge from other disciplines. Accelerating the acquisition of reliable and contextualized knowledge about the fate of complex pristine and modified ecosystems is partly dependent on our ability to exploit economies of scale in shared resources and technical expertise, recognise and make explicit interconnections and feedbacks among sub-disciplines, increase the temporal and spatial scale of existing studies, and improve the dissemination of scientific findings to policy makers and society at large. Enhancing interaction among research efforts is vital if we are to make the most of limited funds and overcome the challenges posed by addressing large-scale interdisciplinary questions. Bringing together a diverse scientific community with a single geographic focus can help increase awareness of research questions both within and among disciplines, and reveal the opportunities that may exist for advancing acquisition of reliable knowledge. This approach could be useful for a variety of globally important scientific questions.
Abstract:
Wernicke’s aphasia (WA) is the classical neurological model of comprehension impairment and, as a result, the posterior temporal lobe is assumed to be critical to semantic cognition. This conclusion is potentially confused by (a) the existence of patient groups with semantic impairment following damage to other brain regions (semantic dementia and semantic aphasia) and (b) an ongoing debate about the underlying causes of comprehension impairment in WA. By directly comparing these three patient groups for the first time, we demonstrate that the comprehension impairment in Wernicke’s aphasia is best accounted for by dual deficits in acoustic-phonological analysis (associated with pSTG) and semantic cognition (associated with pMTG and angular gyrus). The WA group were impaired on both nonverbal and verbal comprehension assessments consistent with a generalised semantic impairment. This semantic deficit was most similar in nature to that of the semantic aphasia group suggestive of a disruption to semantic control processes. In addition, only the WA group showed a strong effect of input modality on comprehension, with accuracy decreasing considerably as acoustic-phonological requirements increased. These results deviate from traditional accounts which emphasise a single impairment and, instead, implicate two deficits underlying the comprehension disorder in WA.
Abstract:
Three Cu(II)-azido complexes of formula [Cu2L2(N3)2] (1), [Cu2L2(N3)2]·H2O (2) and [CuL(N3)]n (3) have been synthesized using the same tridentate Schiff base ligand HL (2-[(3-methylaminopropylimino)methyl]phenol, the condensation product of N-methyl-1,3-propanediamine and salicylaldehyde). Compounds 1 and 2 are basal-apical μ-1,1 double azido-bridged dimers. The dimeric structure of 1 is centrosymmetric but that of 2 is non-centrosymmetric. Compound 3 is a μ-1,1 single azido-bridged 1D chain. The three complexes interconvert in solution and can be obtained in pure form by carefully controlling the synthetic conditions. Compound 2 undergoes an irreversible transformation to 1 upon dehydration in the solid state. The magnetic properties of compounds 1 and 2 show the presence of weak antiferromagnetic exchange interactions mediated by the double 1,1-N3 azido bridges (J = -2.59(4) and -0.10(1) cm⁻¹, respectively). The single 1,1-N3 bridge in compound 3 mediates a negligible exchange interaction.
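Exchange constants of this kind are typically obtained by fitting susceptibility data to the Bleaney-Bowers expression for an S = 1/2 dimer, shown here for reference under the convention H = -2J S₁·S₂ (the paper's own fitting details may differ):

```latex
% Bleaney-Bowers susceptibility of an isotropically coupled S = 1/2
% dimer, for the spin Hamiltonian H = -2J S_1 . S_2:
\[
  \chi_{\mathrm{dimer}} = \frac{2 N g^{2} \mu_{B}^{2}}{k_{B} T}
    \left[\, 3 + \exp\!\left(\frac{-2J}{k_{B} T}\right) \right]^{-1}
\]
% J < 0, as found for complexes 1 and 2, corresponds to
% antiferromagnetic exchange.
```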
Abstract:
Revealing the evolution of well-organized social behavior requires understanding the mechanism by which collective behavior is produced. A well-organized group may be produced by two possible mechanisms, namely central control or distributed control. In the latter case, local interactions between interchangeable components underlie the collective behavior. We focused on a simple behavior of an individual ant and analyzed the interactions between a pair of ants. In an experimental set-up, we placed the workers in a hemisphere without a nest, food, or a queen, and recorded their trajectories. The temporal pattern of velocity of each ant was obtained. From this bottom-up approach, we found the following characteristic behaviors of a single worker and of a pair of workers: (1) the activity of each individual has a rhythmic component; (2) interactions between a pair of individuals result in two types of coupling, namely anti-phase and in-phase coupling. Direct physical contacts between the pair of workers might cause a phase shift of the rhythmic components in individual ants. We also build a simple model based on coupled oscillators as a step toward understanding whole-colony behavior, as sketched below.
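A generic way to picture the two coupling modes reported is to model each ant's activity rhythm as a phase oscillator and couple the pair through their phase difference. The sketch below is a standard two-oscillator Kuramoto-style model, not the authors' exact formulation: it settles into in-phase or anti-phase locking depending on the sign of the coupling constant K.

```python
import math

def simulate(K, steps=20000, dt=0.01, omega=1.0):
    """Two coupled phase oscillators; returns the final phase difference.
    K > 0 pulls the pair towards in-phase, K < 0 towards anti-phase."""
    th1, th2 = 0.0, 2.0                      # arbitrary initial phases
    for _ in range(steps):
        d1 = omega + K * math.sin(th2 - th1)
        d2 = omega + K * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    diff = (th1 - th2) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)     # folded into [0, pi]

for K in (0.5, -0.5):                        # hypothetical coupling strengths
    print(f"K = {K:+.1f}: phase difference -> {simulate(K):.2f} rad")
```

The phase difference φ = θ₁ - θ₂ obeys dφ/dt = -2K sin φ, so φ = 0 (in-phase) is stable for K > 0 and φ = π (anti-phase) is stable for K < 0, matching the two coupling types observed between worker pairs.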
Abstract:
Interpersonal interaction in public goods contexts is very different in character from its depiction in economic theory, despite the fact that the standard model is based on a small number of apparently plausible assumptions. Approaches to the problem are reviewed both from within and outside economics. It is argued that quick fixes, such as a taste for giving, do not provide a way forward. An improved understanding of why people contribute to such goods seems to require a different picture of the relationships between individuals than obtains in standard microeconomic theory, where they are usually depicted as asocial. No single economic model at present is consistent with all the relevant field and laboratory data. It is argued that there are defensible ideas from outside the discipline which ought to be explored, relying on different conceptions of rationality and/or more radically social agents. Three such suggestions are considered: one concerning the expressive/communicative aspect of behaviour, a second the possibility of a part-whole relationship between interacting agents, and a third a version of conformism.
Abstract:
This paper presents single-column model (SCM) simulations of a tropical squall-line case observed during the Coupled Ocean-Atmosphere Response Experiment of the Tropical Ocean/Global Atmosphere Programme. This case-study was part of an international model intercomparison project organized by Working Group 4 ‘Precipitating Convective Cloud Systems’ of the GEWEX (Global Energy and Water-cycle Experiment) Cloud System Study. Eight SCM groups using different deep-convection parametrizations participated in this project. The SCMs were forced by temperature and moisture tendencies that had been computed from a reference cloud-resolving model (CRM) simulation using open boundary conditions. The comparison of the SCM results with the reference CRM simulation provided insight into the ability of current convection and cloud schemes to represent organized convection. The CRM results enabled a detailed evaluation of the SCMs in terms of the thermodynamic structure and the convective mass flux of the system, the latter being closely related to the surface convective precipitation. It is shown that the SCMs could reproduce reasonably well the time evolution of the surface convective and stratiform precipitation, the convective mass flux, and the thermodynamic structure of the squall-line system. The thermodynamic structure simulated by the SCMs depended on how the models partitioned the precipitation between convective and stratiform. However, structural differences persisted in the thermodynamic profiles simulated by the SCMs and the CRM. These differences could be attributed to the fact that the total mass flux used to compute the SCM forcing differed from the convective mass flux. The SCMs could not adequately represent the organized mesoscale circulations or the microphysical/radiative forcing associated with the stratiform region. This issue is generally known as the ‘scale-interaction’ problem, which can only be properly addressed in fully three-dimensional simulations. Sensitivity simulations run by several groups showed that the time evolution of the surface convective precipitation was considerably smoothed when the convective closure was based on convective available potential energy instead of moisture convergence. Finally, additional SCM simulations without a convection parametrization indicated that the impact of a convection parametrization in forced SCM runs was more visible in the moisture profiles than in the temperature profiles, because convective transport is particularly important in the moisture budget.
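Schematically, "forcing the SCMs with CRM-derived tendencies" means integrating each prognostic column variable with a prescribed large-scale term added to the model's own parametrized tendencies, along the following lines (a generic formulation, not any participating group's specific model):

```latex
% Generic single-column budget for temperature T and specific humidity q:
% the "forcing" is the large-scale tendency diagnosed from the reference
% CRM run; the remaining terms come from the SCM's own physics schemes.
\[
  \frac{\partial \phi}{\partial t}
    = \underbrace{F_{\phi}^{\mathrm{LS}}}_{\text{CRM-derived forcing}}
    + \left(\frac{\partial \phi}{\partial t}\right)_{\!\mathrm{conv}}
    + \left(\frac{\partial \phi}{\partial t}\right)_{\!\mathrm{rad}}
    + \left(\frac{\partial \phi}{\partial t}\right)_{\!\mathrm{turb}},
  \qquad \phi \in \{T, q\}
\]
```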
Abstract:
This paper discusses, for the first time, the wind pressure distribution on a building surface immersed in the wind profile of a low-level jet rather than a logarithmic boundary-layer profile. Two types of building models are considered, a low-rise and a high-rise building, classified relative to the low-level jet height. CFD simulations are carried out. The simulation results show that the wind pressure distribution under a low-level jet wind profile is very different from those under the typical uniform and boundary-layer flows. For the low-rise building, the stagnation point is located at the upper level of the windward façade in the low-level jet case, and the separation zone above the roof is not as pronounced as in the uniform case. For the high-rise building model, the stagnation point is located at almost the same height as the low-level jet.
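For contrast, the reference boundary-layer inflow usually follows the logarithmic law, which increases monotonically with height, whereas a low-level jet profile peaks at the jet height and decreases above it; the jet shape below is a simple illustrative form, not the profile used in the paper:

```latex
% Logarithmic boundary-layer profile (u_* friction velocity, kappa = 0.4,
% z_0 roughness length) versus a simple illustrative low-level jet shape
% with maximum speed u_j at jet height z_j:
\[
  u_{\mathrm{log}}(z) = \frac{u_{*}}{\kappa}\,\ln\!\frac{z}{z_{0}},
  \qquad
  u_{\mathrm{jet}}(z) = u_{j}\,\frac{z}{z_{j}}\,
      \exp\!\left(1 - \frac{z}{z_{j}}\right)
\]
% u_jet peaks at z = z_j, consistent with the stagnation point forming
% near the jet height rather than near the top of the building.
```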
Abstract:
We analyze ionospheric convection patterns over the polar regions during the passage of an interplanetary magnetic cloud on January 14, 1988, when the interplanetary magnetic field (IMF) rotated slowly in direction and had a large amplitude. Using the assimilative mapping of ionospheric electrodynamics (AMIE) procedure, we combine simultaneous observations of ionospheric drifts and magnetic perturbations from many different instruments into consistent patterns of high-latitude electrodynamics, focusing on the period of northward IMF. By combining satellite data with ground-based observations, we have generated one of the most comprehensive data sets yet assembled and used it to produce convection maps for both hemispheres. We present evidence that a lobe convection cell was embedded within normal merging convection during a period when the IMF By and Bz components were large and positive. As the IMF became predominantly northward, a strong reversed convection pattern (afternoon-to-morning potential drop of around 100 kV) appeared in the southern (summer) polar cap, while convection in the northern (winter) hemisphere became weak and disordered with a dawn-to-dusk potential drop of the order of 30 kV. These patterns persisted for about 3 hours, until the IMF rotated significantly toward the west. We interpret this behavior in terms of a recently proposed merging model for northward IMF under solstice conditions, for which lobe field lines from the hemisphere tilted toward the Sun (summer hemisphere) drape over the dayside magnetosphere, producing reverse convection in the summer hemisphere and impeding direct contact between the solar wind and field lines connected to the winter polar cap. The positive IMF Bx component present at this time could have contributed to the observed hemispheric asymmetry. Reverse convection in the summer hemisphere broke down rapidly after the ratio |By/Bz| exceeded unity, while convection in the winter hemisphere strengthened. A dominant dawn-to-dusk potential drop was established in both hemispheres when the magnitude of By exceeded that of Bz, with potential drops of the order of 100 kV, even while Bz remained northward. The later transition to southward Bz produced a gradual intensification of the convection, but a greater qualitative change occurred at the transition through |By/Bz| = 1 than at the transition through Bz = 0. The various convection patterns we derive under northward IMF conditions illustrate all possibilities previously discussed in the literature: nearly single-cell and multicell, distorted and symmetric, ordered and unordered, and sunward and antisunward.
Abstract:
Background: Despite the promising benefits of adaptive designs (ADs), their routine use, especially in confirmatory trials, is lagging behind the prominence given to them in the statistical literature. Much of the previous research on barriers and potential facilitators to the use of ADs has been driven from a pharmaceutical drug development perspective, with little focus on trials in the public sector. In this paper, we explore key stakeholders’ experiences, perceptions and views on barriers and facilitators to the use of ADs in publicly funded confirmatory trials. Methods: Semi-structured, in-depth interviews of key stakeholders in clinical trials research (CTU directors, funding board and panel members, statisticians, regulators, chief investigators, data monitoring committee members and health economists) were conducted by telephone or face-to-face, predominantly in the UK. We purposively selected participants sequentially to maximise variation in views and experiences. We employed the framework approach to analyse the qualitative data. Results: We interviewed 27 participants. Some of the perceived barriers were: lack of knowledge and experience coupled with a paucity of case studies, lack of applied training, a degree of reluctance to use ADs, lack of bridge funding and time to support design work, lack of statistical expertise, some anxiety about the impact of early trial stopping on researchers’ employment contracts, lack of understanding of the acceptable scope of ADs and of when ADs are appropriate, and statistical and practical complexities. Reluctance to use ADs seemed to be influenced by, among other things, therapeutic area, unfamiliarity, concerns about their robustness in decision-making and the acceptability of findings to change practice, perceived complexities, and the proposed type of AD. Conclusions: There are still considerable multifaceted individual and organisational obstacles to be addressed to improve uptake and successful implementation of ADs where appropriate. Nevertheless, the inferred positive change in attitudes and receptiveness towards the appropriate use of ADs among public funders is supportive and a stepping stone for their future utilisation by researchers.
Abstract:
This paper proposes a set of well-defined steps to design functional verification monitors intended to verify Floating Point Units (FPUs) described in HDL. The first step consists of defining the input and output domain coverage. Next, the corner cases are defined. Finally, an already-verified reference model is used to test the correctness of the Device Under Verification (DUV). As a case study, a monitor for an IEEE 754-2008-compliant design is implemented. This monitor is built to be easily instantiated in verification frameworks such as OVM. Two different designs were verified, reaching complete input coverage and fully compliant results.
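To illustrate the reference-model step of such a monitor (a hypothetical Python sketch, not the paper's OVM implementation), the snippet below checks a software stand-in for a single-precision FPU adder against the host's own IEEE 754 arithmetic over the usual corner cases: signed zeros, infinities, NaN and denormals.

```python
import math
import struct

def f32(x: float) -> float:
    """Round a Python double to the nearest IEEE 754 single (the DUV width)."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

def duv_add(a: float, b: float) -> float:
    # Stand-in for the Device Under Verification; a real monitor would
    # drive the HDL design and sample its outputs instead.
    return f32(a + b)

def reference_add(a: float, b: float) -> float:
    # Reference model: host IEEE 754 arithmetic rounded to single precision.
    return f32(a + b)

# Corner cases from the input-domain step, pre-rounded to single precision.
corner = [f32(x) for x in (0.0, -0.0, 1.0, -1.0, float("inf"),
                           float("-inf"), float("nan"), 1e-45, 3.4e38)]

failures = 0
for a in corner:
    for b in corner:
        got, want = duv_add(a, b), reference_add(a, b)
        # NaN != NaN, so compare NaN-ness separately; copysign catches
        # a -0.0 result where +0.0 was expected, and vice versa.
        ok = (math.isnan(got) and math.isnan(want)) or \
             (got == want and math.copysign(1, got) == math.copysign(1, want))
        failures += not ok
print(f"{len(corner)**2} vectors checked, {failures} mismatches")
```

In a real flow the `duv_add` stand-in would be replaced by transactions driven into the HDL design, while the corner-case list and the rounded-comparison logic carry over unchanged.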