956 results for Explicit guarantees
Abstract:
Do enterprise social network platforms in an organization make the company more innovative? In theory, through communication, collaboration, and knowledge exchange, innovation ideas can easily be expressed, shared, and discussed with many partners in the organization. Yet, whether this guarantees innovation success remains to be seen. The authors studied how innovation ideas moved--or not--from an enterprise social network platform to regular innovation processes at a large Australian retailer. They found that the success of innovation ideas depends on how easily understandable the idea is on the platform, how long it has been discussed, and how powerful the social network participants are in the organization. These findings inform management strategies for the governance of enterprise social network use and the organizational innovation process.
Abstract:
Farmland bird species have been declining in Europe. Many declines have coincided with a general intensification of farming practices. In Finland, the replacement of mixed farming, including rotational pastures, with specialized cultivation was one of the most drastic changes from the 1960s to the 1990s. This kind of habitat deterioration limits the persistence of populations, as has previously been indicated for local populations. Integrated population monitoring, which gathers species-specific information on population size and demography, can be used to assess the response of a population to environmental changes at a large spatial scale as well. I targeted my analysis at the Finnish starling (Sturnus vulgaris). Starlings are common breeders in farmland habitats, but severe declines of local populations were reported from Finland in the 1970s and 1980s and later from other parts of Europe. Habitat deterioration (replacement of pasture and grassland habitats with specialized cultivation areas) limits the reproductive success of the species. I analysed regional population data in order to exemplify the importance of agricultural change to bird population dynamics. I used nestling ringing and nest-card data from 1951 to 2005 in order to quantify population trends and per capita reproductive success within several geographical regions (south/north and west/east aspects). I used matrix modelling, acknowledging age-specific survival and fecundity parameters and density-dependence, to model population dynamics. Finnish starlings declined by 80% from the end of the 1960s to the end of the 1980s. The observed patterns and the model indicated that the population decline was due to a decline in the carrying capacity of farmland habitats. The decline was most severe in northern Finland, where populations largely became extinct. However, habitat deterioration was most severe in the southern breeding areas.
The deterioration in habitat quality decreased reproduction, which ultimately caused the decline. I suggest that the poorly productive northern populations were partly maintained by immigration from the highly productive southern populations. As the southern populations declined, the cessation of this immigration caused population extinction in the north. This phenomenon was explained by source-sink population dynamics, which I structured and verified on the basis of a spatially explicit simulation model. I found that the southern Finnish starling population exhibits a ten-year cyclic regularity, a phenomenon that can be explained by delayed density-dependence in reproduction.
Abstract:
Particle filters find important applications in the problems of state and parameter estimation of dynamical systems of engineering interest. Since a typical filtering algorithm involves Monte Carlo simulations of the process equations, the sample variance of the estimator is inversely proportional to the number of particles. The sample variance may be reduced if one uses a Rao-Blackwell marginalization of states and performs analytical computations as much as possible. In this work, we propose a semi-analytical particle filter, requiring no Rao-Blackwell marginalization, for state and parameter estimation of nonlinear dynamical systems with additive Gaussian process/observation noises. Through local linearizations of the nonlinear drift fields in the process/observation equations via explicit Ito-Taylor expansions, the given nonlinear system is transformed into an ensemble of locally linearized systems. Using the most recent observation, conditionally Gaussian posterior density functions of the linearized systems are analytically obtained through the Kalman filter. This information is further exploited within the particle filter algorithm for obtaining samples from the optimal posterior density of the states. The potential of the method in state/parameter estimation is demonstrated through numerical illustrations for a few nonlinear oscillators. The proposed filter is found to yield estimates with reduced sample variance and improved accuracy vis-a-vis results from a form of sequential importance sampling filter.
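The inverse relation between estimator variance and particle count can be made concrete with a minimal bootstrap particle filter. This is a generic textbook sketch, not the semi-analytical filter of the abstract; the scalar linear-Gaussian model, parameter values, and function names below are illustrative assumptions chosen so the result is easy to check.

```python
import numpy as np

def bootstrap_pf(ys, n_particles, a=0.9, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_k = a*x_{k-1} + N(0, q), y_k = x_k + N(0, r)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)          # initial particle cloud
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate through the process model
        logw = -0.5 * (y - x) ** 2 / r                         # Gaussian observation log-likelihood
        w = np.exp(logw - logw.max())                          # stabilized weights
        w /= w.sum()
        means.append(np.dot(w, x))                             # weighted posterior-mean estimate
        x = rng.choice(x, size=n_particles, p=w)               # multinomial resampling
    return np.array(means)

# Simulate data from the same model, then filter it
rng = np.random.default_rng(1)
x_true = np.zeros(50)
for k in range(1, 50):
    x_true[k] = 0.9 * x_true[k - 1] + rng.normal(0.0, np.sqrt(0.1))
ys = x_true + rng.normal(0.0, np.sqrt(0.5), 50)

est = bootstrap_pf(ys, n_particles=2000)
rmse = np.sqrt(np.mean((est - x_true) ** 2))
```

Re-running `bootstrap_pf` with different seeds and particle counts shows the estimator's spread shrinking roughly as 1/n_particles, which is the variance behaviour the abstract refers to.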
Abstract:
There has been a recent spate of high-profile infrastructure cost overruns in Australia and internationally. This is just the tip of a longer-term and more deep-seated problem with initial budget estimating practice, well recognised in both academic research and industry reviews: the problem of uncertainty. A case study of the Sydney Opera House is used to identify and illustrate the key causal factors and system dynamics of cost overruns. It is conventionally the role of risk management to deal with such uncertainty, but the type and extent of the uncertainty involved in complex projects is shown to render established risk management techniques ineffective. This paper considers a radical advance on current budget estimating practice, which involves a particular approach to statistical modelling complemented by explicit training in estimating practice. The statistical modelling approach combines the probability management techniques of Savage, which operate on actual distributions of values rather than flawed representations of distributions, and the data pooling technique of Skitmore, where the size of the reference set is optimised. Estimating training employs the calibration development methods pioneered by Hubbard, which reduce the bias of experts caused by over-confidence and improve the consistency of subjective decision-making. A new framework for initial budget estimating practice is developed based on the combined statistical and training methods, with each technique being explained and discussed.
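Savage's idea of operating on actual distributions of values, rather than summary statistics, can be sketched as a Monte Carlo sum of cost components. The components, distribution choices, and dollar figures below are hypothetical illustrations, not data from the paper; the sketch only shows why adding point estimates (here, medians) understates right-skewed project costs.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # trials; each trial is one coherent project scenario

# Hypothetical cost components in $M, right-skewed as construction costs tend to be
structure = rng.lognormal(mean=np.log(40), sigma=0.3, size=N)
services  = rng.lognormal(mean=np.log(25), sigma=0.4, size=N)
fitout    = rng.lognormal(mean=np.log(15), sigma=0.5, size=N)

total = structure + services + fitout   # add the samples, not the summary statistics

# Summing per-component medians ("point estimates") loses the skew of each distribution
sum_of_medians = sum(np.median(c) for c in (structure, services, fitout))
p50, p80 = np.percentile(total, [50, 80])
```

With these assumptions the distribution-based median (`p50`) already exceeds the sum of medians, and a defensible budget with contingency would be set at a higher percentile such as `p80`.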
Abstract:
This article deals with a simulation-based study of the impact of projectiles on thin aluminium plates using LS-DYNA, by modelling plates with shell elements and projectiles with solid elements. In order to establish the required modelling criterion in terms of element size for aluminium plates, a convergence study of residual velocity has been carried out by varying mesh density in the impact zone. Using the preferred material and meshing criteria arrived at here, extremely good prediction of the test residual velocities and ballistic limits given by Gupta et al. (2001) for thin aluminium plates has been obtained. The simulation-based pattern of failure, with localized bulging and a jagged edge of perforation, is similar to the perforation with petalling seen in tests. A number of simulation-based parametric studies have been carried out and results consistent with published test data have been obtained. Despite the robust correlation achieved against published experimental results, it would be prudent to conduct one's own experiments for a final correlation via the present modelling procedure and analysis with the explicit LS-DYNA 970 solver. Hence, a sophisticated ballistic impact testing facility and a high-speed camera have been used to conduct additional tests on grade 1100 aluminium plates of 1 mm thickness with projectiles of four different nose shapes. Finally, using the developed numerical simulation procedure, an excellent correlation of residual velocity and failure modes with the corresponding test results has been obtained.
Abstract:
The remarkable geological and evolutionary history of peninsular India has generated much interest in the patterns and processes that might have shaped the current distributions of its endemic biota. In this regard the Out-of-India hypothesis, which proposes that rafting peninsular India carried Gondwanan forms to Asia after the break-up of the Gondwana supercontinent, has gained prominence. Here we have reviewed molecular studies undertaken on a range of taxa of supposedly Gondwanan origin to better understand the Out-of-India scenario. This re-evaluation of published molecular studies indicates that there is mounting evidence supporting an Out-of-India scenario for various Asian taxa. Nevertheless, in many studies the evidence is inconclusive due to a lack of information on the age of relevant nodes. Studies also indicate that not all Gondwanan forms of peninsular India dispersed out of India. Many of these ancient lineages are confined to peninsular India and therefore are relict Gondwanan lineages. Additionally, for some taxa an Into-India rather than an Out-of-India scenario better explains their current distribution. To identify the Out-of-India component of the Asian biota it is imperative that we understand the complex biogeographical history of India. To this end, we propose three oversimplified yet explicit phylogenetic predictions. These predictions can be tested through the use of molecular phylogenetic tools in conjunction with palaeontological and geological data.
Abstract:
We investigate the transition of a radiatively inefficient phase of a viscous two-temperature accreting flow to a cooling-dominated phase, and vice versa, around black holes. Based on a global sub-Keplerian accretion disk model in steady state, including explicit cooling processes self-consistently, we show that a general advective accretion flow passes through various phases during its infall towards a black hole. Bremsstrahlung, synchrotron and inverse Comptonization of soft photons are considered as possible cooling mechanisms. The flow attains a much lower electron temperature of ~10^8-10^9.5 K, compared to the hot protons at ~10^10.2-10^11.8 K, in the range of accretion rate in Eddington units 0.01 ≲ Ṁ ≲ 100. Therefore, the solutions may potentially explain the hard X-rays and the gamma-rays emitted from AGNs and X-ray binaries. We finally compare the solutions for two different regimes of viscosity and conclude that a weakly viscous flow is expected to be cooling dominated compared to its highly viscous counterpart, which is radiatively inefficient. The flow successfully reproduces the observed luminosities of the under-fed AGNs and quasars (e.g. Sgr A*) and ultra-luminous X-ray sources (e.g. SS433), as well as the highly luminous AGNs and ultra-luminous quasars (e.g. PKS 0743-67), at different combinations of the mass accretion rate and ratio of specific heats.
Abstract:
In these lectures we plan to present a survey of certain aspects of harmonic analysis on a Heisenberg nilmanifold Γ\H^n. Using the Weil-Brezin-Zak transform we obtain an explicit decomposition of L^2(Γ\H^n) into irreducible subspaces invariant under the right regular representation of the Heisenberg group. We then study the Segal-Bargmann transform associated to the Laplacian on a nilmanifold and characterise the image of L^2(Γ\H^n) in terms of twisted Bergman and Hermite Bergman spaces.
Abstract:
We investigate viscous two-temperature accretion disc flows around rotating black holes. We describe the global solution of accretion flows with a sub-Keplerian angular momentum profile by solving the underlying conservation equations, including explicit cooling processes self-consistently. Bremsstrahlung, synchrotron and inverse Comptonization of soft photons are considered as possible cooling mechanisms. We focus on the set of solutions for sub-Eddington, Eddington and super-Eddington mass accretion rates around Schwarzschild and Kerr black holes with a Kerr parameter of 0.998. It is found that the flow, during its infall from the Keplerian to sub-Keplerian transition region to the black hole event horizon, passes through various phases of advection: from the general advective paradigm to the radiatively inefficient phase, and vice versa. The flow attains a much lower electron temperature of ~10^8-10^9.5 K, in the range of accretion rate in Eddington units 0.01 ≲ Ṁ ≲ 100, compared to the hot protons at ~10^10.2-10^11.8 K. Therefore, the solution may potentially explain the hard X-rays and gamma-rays emitted from active galactic nuclei (AGNs) and X-ray binaries. We then compare the solutions for two different regimes of viscosity. We conclude that a weakly viscous flow is expected to be cooling dominated, particularly at the inner region of the disc, compared to its highly viscous counterpart, which is radiatively inefficient. With all the solutions in hand, we finally reproduce the observed luminosities of objects ranging from the underfed AGNs and quasars (e.g. Sgr A*) to ultraluminous X-ray sources (e.g. SS433), at different combinations of input parameters, such as the mass accretion rate and the ratio of specific heats. The set of solutions also appropriately predicts the luminosity observed in highly luminous AGNs and ultraluminous quasars (e.g. PKS 0743-67).
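The accretion rates quoted "in Eddington units" are scaled by the Eddington value. As a minimal sketch of the standard conversion, assuming pure electron-scattering opacity and a nominal 10% radiative efficiency (the function names and the 10 solar-mass example are ours, not from the paper):

```python
import math

# Physical constants (SI)
G       = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c       = 2.998e8        # speed of light, m/s
m_p     = 1.673e-27      # proton mass, kg
sigma_T = 6.652e-29      # Thomson cross-section, m^2
M_sun   = 1.989e30       # solar mass, kg

def eddington_luminosity(M_kg):
    """L_Edd = 4*pi*G*M*m_p*c / sigma_T, the luminosity at which radiation
    pressure on electrons balances gravity on protons."""
    return 4 * math.pi * G * M_kg * m_p * c / sigma_T

def eddington_accretion_rate(M_kg, efficiency=0.1):
    """Mdot_Edd = L_Edd / (efficiency * c^2), in kg/s, for an assumed
    radiative efficiency (0.1 is a conventional nominal value)."""
    return eddington_luminosity(M_kg) / (efficiency * c ** 2)

L_edd_10 = eddington_luminosity(10 * M_sun)       # ~1.3e32 W for 10 M_sun
mdot_edd = eddington_accretion_rate(10 * M_sun)   # kg/s
```

A dimensionless rate of Ṁ = 0.01 in these units then corresponds to 0.01 * `mdot_edd` in kg/s for the chosen mass.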
Abstract:
A new form of a multi-step transversal linearization (MTL) method is developed and numerically explored in this study for a numeric-analytical integration of non-linear dynamical systems under deterministic excitations. As with other transversal linearization methods, the present version also requires that the linearized solution manifold transversally intersects the non-linear solution manifold at a chosen set of points or cross-section in the state space. However, a major point of departure of the present method is that it has the flexibility of treating non-linear damping and stiffness terms of the original system as damping and stiffness terms in the transversally linearized system, even though these linearized terms become explicit functions of time. From this perspective, the present development is closely related to the popular practice of tangent-space linearization adopted in finite element (FE) based solutions of non-linear problems in structural dynamics. The only difference is that the MTL method would require construction of transversal system matrices in lieu of the tangent system matrices needed within an FE framework. The resulting time-varying linearized system matrix is then treated as a Lie element using Magnus’ characterization [W. Magnus, On the exponential solution of differential equations for a linear operator, Commun. Pure Appl. Math., VII (1954) 649–673] and the associated fundamental solution matrix (FSM) is obtained through repeated Lie-bracket operations (or nested commutators). An advantage of this approach is that the underlying exponential transformation could preserve certain intrinsic structural properties of the solution of the non-linear problem. 
Yet another advantage of the transversal linearization lies in the non-unique representation of the linearized vector field – an aspect that has been specifically exploited in this study to enhance the spectral stability of the proposed family of methods and thus contain the temporal propagation of local errors. A simple analysis of the formal orders of accuracy is provided within a finite dimensional framework. Only a limited numerical exploration of the method is presently provided for a couple of popularly known non-linear oscillators, viz. a hardening Duffing oscillator, which has a non-linear stiffness term, and the van der Pol oscillator, which is self-excited and has a non-linear damping term.
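The Magnus characterization cited above represents the fundamental solution matrix of the time-varying linearized system ẋ = A(t)x as a single matrix exponential. A sketch of the first two terms of the standard series (generic, not specific to the MTL construction):

```latex
\Phi(t) = \exp\bigl(\Omega(t)\bigr), \qquad
\Omega(t) = \int_0^t A(t_1)\,\mathrm{d}t_1
  + \frac{1}{2}\int_0^t\!\int_0^{t_1} \bigl[A(t_1),A(t_2)\bigr]\,\mathrm{d}t_2\,\mathrm{d}t_1
  + \cdots
```

Here [·,·] is the matrix commutator (the Lie bracket of the repeated "nested commutator" operations mentioned above). Because every truncation of the series keeps Ω(t) in the Lie algebra, exp(Ω) stays in the corresponding Lie group, which is the structure-preservation property the abstract alludes to.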
Abstract:
Multielectrode neurophysiological recording and high-resolution neuroimaging generate multivariate data that are the basis for understanding the patterns of neural interactions. How to extract directions of information flow in brain networks from these data remains a key challenge. Research over the last few years has identified Granger causality as a statistically principled technique to furnish this capability. The estimation of Granger causality currently requires autoregressive modeling of neural data. Here, we propose a nonparametric approach based on the widely used Fourier and wavelet transforms to estimate both pairwise and conditional measures of Granger causality, eliminating the need for explicit autoregressive data modeling. We demonstrate the effectiveness of this approach by applying it to synthetic data generated by network models with known connectivity and to local field potentials recorded from monkeys performing a sensorimotor task.
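The nonparametric route replaces fitted autoregressive coefficients with spectral quantities estimated directly from the data. As a sketch of the first step only, the cross-spectral density matrix can be estimated by trial-averaged Fourier cross-periodograms; the subsequent spectral factorization into Granger measures is not shown, and the toy two-channel data are illustrative assumptions.

```python
import numpy as np

def cross_spectral_matrix(trials, fs=1.0):
    """Estimate the spectral density matrix S(f) by averaging windowed FFT
    cross-periodograms over trials.

    trials: array of shape (n_trials, n_channels, n_samples)
    returns: freqs (n_freq,), S (n_freq, n_channels, n_channels), Hermitian in the channel indices
    """
    n_trials, n_ch, n = trials.shape
    win = np.hanning(n)
    X = np.fft.rfft(trials * win, axis=-1)            # (trials, channels, freqs)
    S = np.einsum('tif,tjf->fij', X, np.conj(X)) / n_trials  # trial-averaged cross-periodogram
    S /= (win ** 2).sum() * fs                        # window/sampling normalization
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, S

# Toy data: channel y follows channel x with a one-sample lag
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 256))
y = 0.7 * np.roll(x, 1, axis=-1) + 0.3 * rng.standard_normal((100, 256))
trials = np.stack([x, y], axis=1)                     # (100 trials, 2 channels, 256 samples)
freqs, S = cross_spectral_matrix(trials)
```

In the nonparametric method this matrix S(f) is what gets factorized (e.g. by Wilson's algorithm) to obtain the transfer function and noise covariance that define spectral Granger causality.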
Abstract:
By combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an innovative technique is presented to design an automatic drug administration strategy for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used to design the controller (medication dosage). First, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can treat the nominal model patients (patients who can be described by the mathematical model used here with the nominal parameter values) effectively. However, since the system parameters for a realistic model patient can differ from those of the nominal model patients, simulation studies for such patients indicate that the nominal controller is either inefficient or, worse, ineffective; i.e. the trajectory of the number of cancer cells either shows unsatisfactory transient behavior or grows in an unstable manner. Hence, to make the drug dosage history more realistic and patient-specific, a model-following neuro-adaptive controller is augmented to the nominal controller. In this adaptive approach, a neural network trained online facilitates a new adaptive controller. The training process of the neural network is based on Lyapunov stability theory, which guarantees both the stability of the cancer cell dynamics and the boundedness of the network weights. From simulation studies, this adaptive control design approach is found to be very effective in treating the CML disease for realistic patients. Sufficient generality is retained in the mathematical developments so that the technique can be applied to other similar nonlinear control design problems as well.
Abstract:
In order to understand the self-diffusion (D) of a charged, flexible, and porous nanoscopic molecule in water, we carry out very long, fully atomistic molecular dynamics simulations of PAMAM dendrimers up to eight generations in explicit salt water under varying pH. We find that while the radius of gyration (R_g) varies as N^(1/3), the self-diffusion constant (D) scales, surprisingly, as N^(-alpha), with alpha = 0.39 at high pH and 0.5 at neutral pH, indicating a dramatic breakdown of the Stokes-Einstein relation for diffusion of charged nanoscopic molecules. The variation in D as a function of radius of gyration demonstrates the importance of treating water and ions explicitly in the diffusion process of a flexible nanoscopic molecule. In agreement with recent experiments, the self-diffusion constant increases with pH, revealing the importance of dielectric friction in the diffusion process. The shape of a dendrimer is found to fluctuate on a nanosecond time scale. We argue that this flexibility (and also the porosity) of the dendrimer may play an important role in determining the mean square displacement of the dendrimer and the breakdown of the Stokes-Einstein relation between the diffusion constant and the radius.
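For reference, the Stokes-Einstein relation being tested predicts D = k_B T / (6 pi eta R) for a rigid sphere with stick boundary conditions. A minimal sketch with illustrative numbers (a nanometre-scale radius and the viscosity of water), not values taken from the simulations:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def stokes_einstein_D(radius_m, T=300.0, eta=8.9e-4):
    """D = k_B*T / (6*pi*eta*R): diffusion constant of a rigid sphere
    (stick boundary conditions) in a solvent of viscosity eta (Pa s)."""
    return k_B * T / (6 * math.pi * eta * radius_m)

# Hypothetical dendrimer-sized sphere, R ~ 2 nm, in water at 300 K
D_sphere = stokes_einstein_D(2e-9)   # m^2/s, on the order of 1e-10

# Since R ~ N^(1/3) for a compact object, Stokes-Einstein implies D ~ N^(-1/3);
# the simulations above instead find D ~ N^(-0.39) to N^(-0.5).
```

The gap between the N^(-1/3) prediction and the observed exponents is the "breakdown" the abstract refers to.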
Abstract:
A half-duplex constrained non-orthogonal cooperative multiple access (NCMA) protocol suitable for transmission of information from N users to a single destination in a wireless fading channel is proposed. Transmission in this protocol comprises a broadcast phase and a cooperation phase. In the broadcast phase, each user takes turns broadcasting its data to all other users and the destination in an orthogonal fashion in time. In the cooperation phase, each user transmits a linear function of what it received from all other users as well as its own data. In contrast to the orthogonal extension of cooperative relay protocols to cooperative multiple access channels, wherein at any point of time only one user is considered as a source and all the other users behave as relays and do not transmit their own data, the NCMA protocol relaxes the orthogonality built into those protocols and hence allows for a more spectrally efficient usage of resources. Code design criteria for achieving the full diversity of N in the NCMA protocol are derived using pairwise error probability (PEP) analysis, and it is shown that this can be achieved with a minimum total time duration of 2N - 1 channel uses. Explicit construction of full-diversity codes is then provided for an arbitrary number of users. Since the maximum likelihood decoding complexity grows exponentially with the number of users, the notion of g-group decodable codes is introduced for our setup and a set of necessary and sufficient conditions is also obtained.
Abstract:
Purpose – This paper aims to explore the potential contributions of social media in supporting tacit knowledge sharing, according to physicians’ perspectives and experiences. Design/methodology/approach – Adopting a qualitative survey design, the authors interviewed 24 physicians. Purposive and snowball sampling were used to select the participants. A thematic analysis approach was used for data analysis. Findings – The study revealed five major themes and over 20 sub-themes as potential contributions of social media to tacit knowledge flow among physicians. The themes included socialising, practising, networking, storytelling and encountering. In addition, with the help of the literature and the supporting data, the study proposed a conceptual model that explains the potential contribution of social media to tacit knowledge sharing. Research limitations/implications – The study had both theoretical limitations (the difficulty of distinguishing tacit and explicit knowledge in practice) and practical limitations (small sample size). The study findings have implications for the healthcare industry, whose clinical teams are not always physically co-located but must exchange their critical experiential and tacit knowledge. Originality/value – The study has opened up a new discussion of this area by demonstrating and conceptualising how social media tools may facilitate tacit knowledge sharing.