941 results for local productive systems


Relevance:

30.00%

Publisher:

Abstract:

Particle flow patterns were investigated for wet granulation and dry powder mixing in ploughshare mixers using Positron Emission Particle Tracking (PEPT). In a 4-L mixer, calcium carbonate with mean size 45 μm was granulated using a 50 wt.% solution of glycerol and water as binding fluid, and particle movement was followed using a 600-μm calcium hydroxy-phosphate tracer particle. In a 20-L mixer, dry powder flow was studied using a 600-μm resin bead tracer particle to simulate the bulk polypropylene powder with mean size 600 μm. Important differences were seen between particle flow patterns for wet and dry systems. Particle speed relative to blade speed was lower in the wet system than in the dry system, with the ratios of average particle speed to blade tip speed for all experiments in the range 0.01-0.15. In the axial plane, the same particle motion was observed around each blade; this provides a significant advance for modelling flow in ploughshare mixers. For the future, a detailed understanding of the local velocity, acceleration and density variations around a plough blade will reveal the effects of flow patterns in granulating systems on the resultant distribution of granular product attributes such as size, density and strength. © 2002 Elsevier Science B.V. All rights reserved.
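As a minimal illustration of the speed ratio quoted above (a sketch only; the drum diameter, shaft speed and particle speed below are assumed values, not figures from the paper):

```python
import math

def tip_speed(shaft_rpm: float, drum_diameter_m: float) -> float:
    """Blade tip speed (m/s) for a blade sweeping the full drum diameter."""
    return math.pi * drum_diameter_m * shaft_rpm / 60.0

# Illustrative values only (not taken from the paper): a 0.3 m drum at 200 rpm,
# and an assumed PEPT-averaged particle speed of 0.05 m/s.
v_tip = tip_speed(shaft_rpm=200.0, drum_diameter_m=0.3)
v_particle = 0.05
print(f"tip speed = {v_tip:.2f} m/s, particle/tip ratio = {v_particle / v_tip:.3f}")
```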

Relevance:

30.00%

Publisher:

Abstract:

The placement of monocular laser lesions in the adult cat retina produces a lesion projection zone (LPZ) in primary visual cortex (V1) in which the majority of neurons have a normally located receptive field (RF) for stimulation of the intact eye and an ectopically located RF (displaced to intact retina at the edge of the lesion) for stimulation of the lesioned eye. Animals that had such lesions for 14-85 d were studied under halothane and nitrous oxide anesthesia with conventional neurophysiological recording techniques and stimulation with moving light bars. Previous work suggested that a candidate source of input, which could account for the development of the ectopic RFs, was long-range horizontal connections within V1. The critical contribution of such input was examined by placing a pipette containing the neurotoxin kainic acid at a site in the normal V1 visual representation that overlapped with the ectopic RF recorded at a site within the LPZ. Continuation of well-defined responses to stimulation of the intact eye served as a control against direct effects of the kainic acid at the LPZ recording site. In six of seven cases examined, kainic acid deactivation of neurons at the injection site blocked responsiveness to lesioned-eye stimulation at the ectopic RF for the LPZ recording site. We therefore conclude that long-range horizontal projections contribute to the dominant input underlying the capacity for retinal lesion-induced plasticity in V1.

Relevance:

30.00%

Publisher:

Abstract:

Reptiles change heart rate and blood flow patterns in response to heating and cooling, thereby decreasing the behavioural cost of thermoregulation. We tested the hypothesis that locally produced vasoactive substances, nitric oxide and prostaglandins, mediate the cardiovascular response of reptiles to heat. Heart rate and blood pressure were measured in eight crocodiles (Crocodylus porosus) during heating and cooling and while sequentially inhibiting nitric-oxide synthase and cyclooxygenase enzymes. Heart rate and blood pressure were significantly higher during heating than during cooling in all treatments. Power spectral density of heart rate and blood pressure increased significantly during heating and cooling compared to the preceding period of thermal equilibrium. Spectral density of heart rate in the high frequency band (0.19-0.70 Hz) was significantly greater during cooling in the saline treatment compared to when nitric-oxide synthase and cyclooxygenase enzymes were inhibited. Cross spectral analysis showed that changes in blood pressure preceded heart rate changes at low frequencies (
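The band-limited spectral power described above can be illustrated with a short, hedged sketch; the synthetic heart-rate series, sampling rate and Welch settings below are assumptions standing in for the recordings, with only the 0.19-0.70 Hz band taken from the abstract:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

# Synthetic stand-in for a beat-to-beat heart-rate series (the crocodile
# recordings are not reproduced here).
fs = 4.0                                   # assumed sampling rate after resampling, Hz
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(0)
heart_rate = 40 + 2 * np.sin(2 * np.pi * 0.4 * t) + rng.normal(0, 0.5, t.size)

f, pxx = welch(heart_rate, fs=fs, nperseg=512)        # power spectral density
band = (f >= 0.19) & (f <= 0.70)
hf_power = trapezoid(pxx[band], f[band])              # integrated band power

print(f"spectral power in the 0.19-0.70 Hz band: {hf_power:.3f}")
```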

Relevance:

30.00%

Publisher:

Abstract:

When can a quantum system of finite dimension be used to simulate another quantum system of finite dimension? What restricts the capacity of one system to simulate another? In this paper we complete the program of studying what simulations can be done with entangling many-qudit Hamiltonians and local unitary control. By entangling we mean that every qudit is coupled to every other qudit, at least indirectly. We demonstrate that the only class of finite-dimensional entangling Hamiltonians that are not universal for simulation is the class of entangling Hamiltonians on qubits whose Pauli operator expansion contains only terms coupling an odd number of systems, as identified by Bremner [Phys. Rev. A 69, 012313 (2004)]. We show that in all other cases entangling many-qudit Hamiltonians are universal for simulation.
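The parity criterion quoted above can be checked numerically for small systems. The sketch below (an illustration under stated assumptions, not code from the paper) expands a qubit Hamiltonian in the Pauli basis and reports which coupling weights appear, so one can see whether only terms coupling an odd number of qubits are present:

```python
import itertools
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_weights(H: np.ndarray, n_qubits: int, tol: float = 1e-10):
    """Return the set of weights (number of non-identity factors) appearing
    in the Pauli expansion of an n-qubit Hamiltonian H."""
    weights = set()
    for labels in itertools.product("IXYZ", repeat=n_qubits):
        P = PAULIS[labels[0]]
        for l in labels[1:]:
            P = np.kron(P, PAULIS[l])
        coeff = np.trace(P.conj().T @ H) / 2**n_qubits
        if abs(coeff) > tol:
            weights.add(sum(l != "I" for l in labels))
    return weights

# Example: H = X (x) X (x) X couples an odd number (3) of qubits in its only term,
# so it falls in the exceptional (non-universal) class described above.
H = np.kron(np.kron(X, X), X)
w = pauli_weights(H, n_qubits=3)
print(w, "-> only odd-weight terms" if all(k % 2 == 1 for k in w) else "-> has even-weight terms")
```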

Relevance:

30.00%

Publisher:

Abstract:

The diagrammatic strong-coupling perturbation theory (SCPT) for correlated electron systems is developed for intersite Coulomb interaction and for a nonorthogonal basis set. The construction is based on iterations of exact closed equations for many-electron Green functions (GFs) for Hubbard operators in terms of functional derivatives with respect to external sources. The graphs that do not contain contributions from the fluctuations of the local population numbers of the ion states play a special role: a one-to-one correspondence is found between the subset of such graphs for the many-electron GFs and the complete set of Feynman graphs of weak-coupling perturbation theory (WCPT) for single-electron GFs. This fact is used to formulate the approximation of renormalized Fermions (ARF), in which the many-electron quasi-particles behave analogously to normal Fermions. Then, by analyzing (a) Sham's equation, which connects the self-energy and the exchange-correlation potential in density functional theory (DFT), and (b) the Galitskii and Migdal expressions for the total energy, written within WCPT and within ARF SCPT, we suggest a way to improve the description of systems with correlated electrons within the local density approximation (LDA) to DFT. The formulation in terms of renormalized Fermions LDA (RF LDA) is obtained by introducing the spectral weights of the many-electron GFs into the definitions of the charge density, the overlap matrices, and the effective mixing and hopping matrix elements of existing electronic structure codes, whereas the weights themselves have to be found from an additional set of equations. Compared with LDA+U and self-interaction correction (SIC) methods, RF LDA has the advantage of taking into account the transfer of spectral weights and, when formulated in terms of GFs, also allows for consideration of excitations and nonzero temperature. Going beyond the ARF SCPT, as well as RF LDA, and taking into account the fluctuations of ion population numbers would require writing completely new codes for ab initio calculations. The application of RF LDA to ab initio band structure calculations for rare earth metals is presented in part II of this study (this issue). © 2005 Wiley Periodicals, Inc.

Relevance:

30.00%

Publisher:

Abstract:

Visual acuity is limited by the size and density of the smallest retinal ganglion cells, which correspond to the midget ganglion cells in primate retina and the beta-ganglion cells in cat retina, both of which have concentric receptive fields that respond at either light-On or light-Off. In contrast, the smallest ganglion cells in the rabbit retina are the local edge detectors (LEDs), which respond to spot illumination at both light-On and light-Off. However, the LEDs do not predominate in the rabbit retina and the question arises: what role do they play in fine spatial vision? We studied the morphology and physiology of LEDs in the isolated rabbit retina and examined how their response properties are shaped by the excitatory and inhibitory inputs. Although the LEDs comprise only ~15% of the ganglion cells, neighboring LEDs are separated by 30-40 μm on the visual streak, which is sufficient to account for the grating acuity of the rabbit. The spatial and temporal receptive-field properties of LEDs are generated by distinct inhibitory mechanisms. The strong inhibitory surround acts presynaptically to suppress both the excitation and the inhibition elicited by center stimulation. The temporal properties, characterized by sluggish onset, sustained firing, and low bandwidth, are mediated by the temporal properties of the bipolar cells and by postsynaptic interactions between the excitatory and inhibitory inputs. We propose that the LEDs signal fine spatial detail during visual fixation, when high temporal frequencies are minimal.

Relevance:

30.00%

Publisher:

Abstract:

Whilst traditional optimisation techniques based on mathematical programming are in common use, they suffer from an inability to explore the complexity of decision problems addressed using agricultural system models. In these models, the full decision space is usually very large, while the solution space is characterized by many local optima. Methods to search such large decision spaces rely on effective sampling of the problem domain. Nevertheless, problem reduction based on insight into agronomic relations and farming practice is necessary to safeguard computational feasibility. Here, we present a global search approach based on an Evolutionary Algorithm (EA). We introduce a multi-objective evaluation technique within this EA framework, linking the optimisation procedure to the APSIM cropping systems model. The approach addresses the issue of system management when faced with a trade-off between economic and ecological consequences.
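A hedged sketch of the kind of multi-objective evolutionary evaluation described above follows. The `run_crop_model` function is a hypothetical stand-in for a call to the APSIM cropping systems model (its real interface is not reproduced here), and the economic/ecological objectives, parameter ranges and mutation scheme are illustrative assumptions:

```python
import random

def run_crop_model(nitrogen_rate, sowing_day):
    """Hypothetical stand-in for an APSIM simulation call: returns
    (gross_margin, nitrate_leaching) for one candidate management strategy."""
    yield_t_ha = 4.0 + 0.02 * nitrogen_rate - 0.01 * abs(sowing_day - 150)
    gross_margin = 250 * yield_t_ha - 1.2 * nitrogen_rate     # $/ha, toy economics
    leaching = 0.15 * nitrogen_rate                           # kg N/ha, toy ecology
    return gross_margin, leaching

def dominates(a, b):
    """a dominates b: no worse in both objectives and strictly better in one
    (maximise gross margin, minimise leaching)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

rng = random.Random(1)
# Each individual is a (nitrogen rate, sowing day) management strategy.
population = [(rng.uniform(0, 200), rng.uniform(120, 180)) for _ in range(30)]

for _ in range(50):                                           # simple evolutionary loop
    scores = [run_crop_model(n, d) for n, d in population]
    front = [ind for ind, s in zip(population, scores)
             if not any(dominates(other, s) for other in scores)]   # Pareto set
    # refill the population by mutating members of the current front
    population = [(min(200, max(0, n + rng.gauss(0, 10))),
                   min(180, max(120, d + rng.gauss(0, 3))))
                  for n, d in (rng.choice(front) for _ in range(30))]

print(f"{len(front)} non-dominated management strategies found")
```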

Relevance:

30.00%

Publisher:

Abstract:

Although the current level of organic production in industrialised countries amounts to little more than 1-2 percent, it is recognised that one of the major issues shaping agricultural output over the next several decades will be the demand for organic produce (Dixon et al. 2001). In Australia, the issues of healthy food and environmental concern contribute to increasing demand and market volumes for organic produce. However, in Indonesia, using more economical inputs for organic production is a supply-side factor driving organic production. For individual growers and processors, conversion from conventional to organic agriculture is often a challenging step, entailing a thorough revision of established practices and heightened market insecurity. This paper examines the potential for a systems approach to the analysis of the conversion process, to yield insights for household and community decisions. A framework for applying farming systems research to investigate the benefits of organic production in both Australia and Indonesia is discussed. The framework incorporates scope for farmer participation, crucial to the understanding of farming systems; analysis of production; and relationships to resources, technologies, markets, services, policies and institutions in their local cultural context. A systems approach offers the potential to internalise the external effects that may be constraining decisions to convert to organic production, and for the design of decision-making tools to assist households and the community. Systems models can guide policy design and serve as a mechanism for predicting the impact of changes to the policy and market environments. The increasing emphasis of farming systems research on community and environment in recent years is in keeping with the proposed application to organic production, processing and marketing issues. The approach will also facilitate the analysis of critical aspects of the Australian production, marketing and policy environment, and the investigation of these same features in an Indonesian context.

Relevance:

30.00%

Publisher:

Abstract:

The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing with either global or local control of the inter-qubit interaction and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into/out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
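The density-matrix simulation mentioned above can be sketched with a generic Lindblad master-equation integration. The example below uses QuTiP's `mesolve` on a single driven two-level system with illustrative parameters; it is a minimal sketch of the technique, not the authors' molecular-nanomagnet or spin-photon model:

```python
import numpy as np
from qutip import basis, destroy, mesolve, sigmax, sigmaz

# A single driven two-level system with relaxation; all parameters are
# illustrative assumptions, not values from the proposals described above.
omega_q = 2 * np.pi * 1.0          # qubit splitting
drive   = 2 * np.pi * 0.1          # drive amplitude
gamma   = 0.02                     # relaxation rate modelling decoherence

H = -0.5 * omega_q * sigmaz() + drive * sigmax()
c_ops = [np.sqrt(gamma) * destroy(2)]     # collapse operator: decay |1> -> |0>

psi0 = basis(2, 1)                        # start in the excited state |1>
tlist = np.linspace(0, 50, 500)

result = mesolve(H, psi0, tlist, c_ops, e_ops=[sigmaz()])
print("final <sigma_z> =", result.expect[0][-1])
```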

Relevance:

30.00%

Publisher:

Abstract:

This paper applies Latour’s 1992 translation map as a device to explore the development of and recent conflict between two data standards for the exchange of business information – EDIFACT and XBRL. Our research is focussed in France, where EDIFACT is well established and XBRL is just emerging. The alliances supporting both standards are local and global. The French/European EDIFACT is promulgated through the United Nations while a consortium of national jurisdictions and companies has coalesced around the US initiated XBRL International (XII). We suggest cultural differences pose a barrier to co-operation between the two networks. Competing data standards create the risk of switching costs. The different technical characteristics of the standards are identified as raising implications for regulators and users. A key concern is the lack of co-ordination of data standard production and the mechanisms regulatory agencies use to choose platforms for electronic data submission.

Relevance:

30.00%

Publisher:

Abstract:

Implementation studies and related research in organizational theory can be enhanced by drawing on the field of complex systems to understand better and, as a consequence, more successfully manage change. This article reinterprets data previously published in the British Journal of Management to reveal a new contribution, that policy implementation processes should be understood as a self-organizing system in which adaptive abilities are extremely important for stakeholders. In other words, national policy is reinterpreted at the local level, with each local organization uniquely mixing elements of national policy with their own requirements making policy implementation unpredictable and more sketchy. The original article explained different paces and directions of change in terms of traditional management processes: leadership, politics, implementation and vision. By reinterpreting the data, it is possible to reveal that deeper level processes, which are more emergent, are also at work influencing change, which the authors label possibility space. Implications for theory, policy and practice are identified.

Relevance:

30.00%

Publisher:

Abstract:

This paper makes a case for taking a systems view of knowledge management within health-care provision, concentrating on the emergency care process in the UK National Health Service. It draws upon research in two case-study organizations (a hospital and an ambulance service). The case-study organizations appear to be approaching knowledge (and information) management in a somewhat fragmented way. They are trying to think more holistically, but (perhaps) because of the ways their organizations and their work are structured, they cannot 'see' the whole of the care process. The paper explores the complexity of knowledge management in emergency health care and draws the distinction for knowledge management between managing local and operational knowledge, and global and clinical knowledge.

Relevance:

30.00%

Publisher:

Abstract:

The thrust of the argument presented in this chapter is that inter-municipal cooperation (IMC) in the United Kingdom reflects local government's constitutional position and its exposure to the exigencies of Westminster (elected central government) and Whitehall (centre of the professional civil service that services central government). For the most part councils are without general powers of competence and are restricted in what they can do by Parliament. This suggests that the capacity for locally driven IMC is restricted and operates principally within a framework constructed by central government's policy objectives and legislation and the political expediencies of the governing political party.

In practice, however, recent examples of IMC demonstrate that the practices are more complex than this initial analysis suggests. Central government may exert top-down pressures and impose hierarchical directives, but there are important countervailing forces. Constitutional changes in Scotland and Wales have shifted the locus of central-local relations away from Westminster and Whitehall. In England, the seeding of English government regional offices in 1994 has evolved into an important structural arrangement that encourages councils to work together. Within the local government community there is now widespread acknowledgement that to achieve the ambitious targets set by central government, councils are, by necessity, bound to cooperate and work with other agencies.

In recent years, the fragmentation of public service delivery has affected the scope of IMC. Elected local government in the UK is now only one piece of a complex jigsaw of agencies that provides services to the public; whether it is with non-elected bodies, such as health authorities, public protection authorities (police and fire), voluntary nonprofit organisations or for-profit bodies, councils are expected to cooperate widely with agencies in their localities. Indeed, for projects such as regeneration and community renewal, councils may act as the coordinating agency but the success of such projects is measured by collaboration and partnership working (Davies 2002). To place these developments in context, IMC is an example of how, in spite of the fragmentation of traditional forms of government, councils work with other public service agencies and other councils through the medium of interagency partnerships, collaboration between organisations and a mixed economy of service providers.

Such an analysis suggests that, following changes to the system of local government, contemporary forms of IMC are less dependent on vertical arrangements (top-down direction from central government) as they are replaced by horizontal modes (expansion of networks and partnership arrangements). Evidence suggests, however, that central government continues to steer local authorities through the agency of inspectorates and regulatory bodies, and through policy initiatives, such as local strategic partnerships and local area agreements (Kelly 2006), thus questioning whether, in the case of UK local government, the shift from hierarchy to network and market solutions is less differentiated and transformation less complete than some literature suggests.

Vertical or horizontal pressures may promote IMC, yet similar drivers may deter collaboration between local authorities. An example of negative vertical pressure was central government's change of the systems of local taxation during the 1980s.
The new taxation regime replaced a tax on property with a tax on individual residency. Although the community charge lasted only a few years, it was a highpoint of the then Conservative government's policy that encouraged councils to compete with each other on the basis of the level of local taxation. In practice, however, the complexity of local government funding in the UK rendered worthless any meaningful ambition of councils competing with each other, especially as central government granting to local authorities is predicated (however imperfectly) on at least notional equalisation between those areas with lower tax yields and the more prosperous locations.

Horizontal pressures comprise factors such as planning decisions. Over the last quarter century, councils have competed on the granting of permission to out-of-town retail and leisure complexes, now recognised as detrimental to neighbouring authorities because economic forces prevail and local, independent shops are unable to compete with multiple companies. These examples illustrate tensions at the core of the UK polity of whether IMC is feasible when competition between local authorities, heightened by local differences, reduces opportunities for collaboration.

An alternative perspective on IMC is to explore whether specific purposes or functions promote or restrict it. Whether in the principal areas of local government responsibilities relating to social welfare, development and maintenance of the local infrastructure or environmental matters, there are examples of IMC. But opportunities have diminished considerably as councils lost responsibility for service provision as a result of privatisation and transfer of powers to new government agencies or to central government. Over the last twenty years councils have lost their role in the provision of further- or higher-education, public transport and water/sewage. Councils have commissioning power but only a limited presence in providing housing needs, social care and waste management. In other words, as a result of central government policy, there are, in practice, currently far fewer opportunities for councils to cooperate.

Since 1997, the New Labour government has promoted IMC through vertical drivers and policy development; the operation of these policy initiatives is discussed following the framework of the editors. Current examples of IMC are notable for being driven by higher tiers of government, working with subordinate authorities in principal-agent relations. Collaboration between local authorities and intra-, inter- and cross-sectoral partnerships are initiated by central government. In other words, IMC is shaped by hierarchical drivers from higher levels of government but, in practice, is locally varied and determined less by formula than by necessity and function. © 2007 Springer.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. The thesis addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. This topic is driven by a series of problems in neuroscience, which represent the principal background motive behind the material in this work. The underlying system is the human brain and the generative process of the data is based on modern electromagnetic neuroimaging methods. In this thesis, the underlying functional brain mechanisms are modelled using the recent mathematical formalism of dynamical systems in complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis to model its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain's complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way the neural elements communicate with each other using the brain's anatomical structure, through phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which alludes to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel functional connectivity algorithms, which are designed to extract different specific aspects of interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow for indirect assessment of synchronisation in the local network from a single time series. This approach is useful for inferring coupling within a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine-learning extension of the well-known concept of Granger causality. The thesis discussion is developed alongside examples of synthetic and experimental real data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies. They are helpful for testing the techniques developed in this thesis. The real datasets are provided to illustrate the problem of brain connectivity in the case of important neurological disorders such as epilepsy and Parkinson's disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterize the underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs. The methodology is also applied to an MEG dataset containing healthy, Parkinson's and dementia subjects, with the aim of distinguishing pathological from physiological patterns of connectivity.
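A minimal sketch of classical pairwise linear Granger causality, the concept the thesis extends with machine learning, is given below; the AR order, the toy coupled system and the function names are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

def granger_f_stat(x, y, order=5):
    """F-style statistic for 'x Granger-causes y': compare the residual variance
    of y predicted from its own past with that of y predicted from the past of
    both y and x, using least-squares AR fits."""
    n = len(y)
    Y = y[order:]
    own = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    full = np.column_stack([own] + [x[order - k:n - k] for k in range(1, order + 1)])

    def rss(X):
        X = np.column_stack([np.ones(len(Y)), X])      # add intercept
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r

    rss_restricted, rss_full = rss(own), rss(full)
    df1, df2 = order, len(Y) - 2 * order - 1
    return ((rss_restricted - rss_full) / df1) / (rss_full / df2)

# Toy coupled system: y is driven by the past of x, but not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros_like(x)
for t in range(2, len(x)):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.normal()

print("x -> y:", granger_f_stat(x, y))   # large statistic expected
print("y -> x:", granger_f_stat(y, x))   # small statistic expected
```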

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here, two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz '63 (3-dimensional model). The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
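A brief sketch of the Ornstein-Uhlenbeck benchmark mentioned above follows: an Euler-Maruyama simulation plus the exact Gaussian transition log-likelihood that makes the OU case an analytic reference. Parameter values and the grid-search estimate are illustrative assumptions, not results from the thesis:

```python
import numpy as np

def simulate_ou(theta, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama discretisation of dX = -theta * X dt + sigma dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        x[t + 1] = x[t] - theta * x[t] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

def ou_exact_loglik(x, theta, sigma, dt):
    """Exact log-likelihood of an observed OU path: each transition is Gaussian
    with mean x*exp(-theta*dt) and variance sigma^2 (1 - exp(-2 theta dt)) / (2 theta)."""
    mean = x[:-1] * np.exp(-theta * dt)
    var = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

rng = np.random.default_rng(42)
path = simulate_ou(theta=1.5, sigma=0.8, x0=0.0, dt=0.01, n_steps=5000, rng=rng)

# Crude drift-parameter estimate by scanning the exact likelihood on a grid.
thetas = np.linspace(0.5, 3.0, 26)
best = max(thetas, key=lambda th: ou_exact_loglik(path, th, sigma=0.8, dt=0.01))
print(f"true theta = 1.5, maximum-likelihood estimate on grid = {best:.2f}")
```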