45 results for Just in Time
Abstract:
Flexible Assembly Systems (FASs) are normally associated with the automatic, or robotic, assembly of products, supported by automated material handling systems. However, manual assembly operations are still prevalent within many industries, where the complexity and variety of products prohibit the development of suitable automated assembly equipment. This article presents a generic model for incorporating flexibility into the design and control of assembly operations concerned with high variety/low volume manufacture, drawing on the principles for Flexible Manufacturing Systems (FMS) and Just-in-Time (JIT) delivery. It is based on work being undertaken in an electronics company where the assembly operations have been overhauled and restructured in response to a need for greater flexibility, shorter cycle times and reduced inventory levels. The principles employed are in themselves not original. However, the way they have been combined and tailored has created a total manufacturing control system which represents a new concept for responding to demands placed on market driven firms operating in an uncertain environment.
Abstract:
A simulation model has been constructed of a valve manufacturing plant with the aim of assessing capacity requirements in response to a forecast increase in demand. The plant provides a weekly cycle of valves of varying types, based on a yearly production plan. Production control is provided by a just-in-time type system to minimise inventory. The simulation model investigates the effect on production lead time of a range of valve sequences into the plant. The study required the collection of information from a variety of sources, and a model that reflected the true capabilities of the production system. The simulation results convinced management that substantial changes were needed in order to meet demand. The case highlights the use of simulation in enabling a manager to quantify operational scenarios and thus provide a rational basis on which to take decisions on meeting performance criteria.
Abstract:
A range of physical and engineering systems exhibit irregular complex dynamics featuring an alternation of quiet and burst time intervals, called intermittency. The form of intermittency most popular in laser science is on-off intermittency [1]. On-off intermittency can be understood as the conversion of noise, in a system close to an instability threshold, into effective time-dependent fluctuations that result in an alternation of stable and unstable periods. On-off intermittency has recently been demonstrated in semiconductor, Erbium-doped and Raman lasers [2-5]. The recently demonstrated random distributed feedback (random DFB) fiber laser exhibits irregular dynamics near the generation threshold [6,7]. Here we show intermittency in a cascaded random DFB fiber laser. We study intensity fluctuations in a random DFB fiber laser based on nitrogen-doped fiber. Under appropriate pumping, the laser generates first and second Stokes components at 1120 nm and 1180 nm, respectively. We study intermittency in the radiation of the second Stokes wave. A typical time trace near the generation threshold of the second Stokes wave (Pth) is shown in Fig. 1a. From a number of sufficiently long time traces we calculate the statistical distribution of intervals between major spikes in the time dynamics, Fig. 1b. To eliminate the contribution of high-frequency components of spikes we use a low-pass filter along with a reference value of the output power. The experimental data are fitted by a power law.
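The abstract describes fitting a power law to the distribution of intervals between major spikes. As a minimal sketch of that kind of analysis, here is a maximum-likelihood (Hill) estimate of the exponent of P(tau) ~ tau**(-alpha), run on synthetic Pareto-distributed intervals; the data, the cutoff `tau_min`, and the exponent are hypothetical and not taken from the paper.

```python
import numpy as np

def power_law_exponent(intervals, tau_min):
    """Maximum-likelihood (Hill) estimate of alpha for
    P(tau) ~ tau**(-alpha), using intervals tau >= tau_min."""
    tail = np.asarray(intervals, dtype=float)
    tail = tail[tail >= tau_min]
    return 1.0 + tail.size / np.sum(np.log(tail / tau_min))

# Synthetic inter-spike intervals: Pareto-distributed via inverse-CDF
# sampling, so the true density decays as tau**(-2.5)
rng = np.random.default_rng(0)
intervals = (1 - rng.random(50_000)) ** (-1 / 1.5)
alpha = power_law_exponent(intervals, tau_min=1.0)  # close to 2.5
```

The MLE is preferable to a least-squares fit on a log-log histogram, which is sensitive to binning in the sparse tail.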
Abstract:
A cross-sectional study aims to describe the overall picture of a phenomenon, a situational problem, an attitude or an issue, by asking a cross-section of a given population at one specified moment in time. This paper describes the key features of the cross-sectional survey method. It begins by highlighting the main principles of the method, then discusses stages in the research process, drawing on two surveys of primary care pharmacists to illustrate some salient points about planning, sampling frames, definition and conceptual issues, research instrument design and response rates. Four constraints in prescribing studies were noted. First, the newness of the subject meant there was little existing knowledge on which to base a questionnaire. Second, there was no publicly available database to serve as a sampling frame, so a pragmatic sampling exercise was used. Third, the definition of a primary care pharmacist (PCP) and respondents' recognition of that name and identification with the new role limited the response. Fourth, a growing problem for all surveys, but particularly with pharmacists and general practitioners (GPs), is the danger of survey fatigue, which has a negative impact on response levels.
Abstract:
Two experiments examined the extent to which attitudes changed following majority and minority influence are resistant to counter-persuasion. In both experiments, participants' attitudes were measured after exposure to two messages, separated in time, which argued opposite positions (an initial message and a counter-message). In the first experiment, attitudes following minority endorsement of the initial message were more resistant to a second counter-message only when the initial message contained strong rather than weak arguments. Attitudes changed following majority influence did not resist the second counter-message and returned to their pre-test level. Experiment 2 manipulated message processing by varying whether recipients were forewarned of a memory test (i.e., expected to recall the message). When a memory warning was given, which should increase message processing, attitudes changed following both majority and minority influence resisted the second counter-message. The results support the view that minority influence instigates systematic processing of its arguments, leading to attitudes that resist counter-persuasion. Attitudes formed following majority influence yield to counter-persuasion unless there is a secondary task that encourages message processing.
Abstract:
This paper uses a meta-Malmquist index for measuring productivity change of the water industry in England and Wales and compares this to the traditional Malmquist index. The meta-Malmquist index computes productivity change with reference to a meta-frontier; it is computationally simpler and it is circular. The analysis covers all 22 UK water companies in existence in 2007, using data over the period 1993–2007. We focus on operating expenditure in line with assessments in this field, which treat operating and capital expenditure as lacking substitutability. We find important improvements in productivity between 1993 and 2005, most of which were due to frontier shifts rather than catch-up to the frontier by companies. After 2005, productivity shows a declining trend. We further use the meta-Malmquist index to compare the productivities of companies at the same and at different points in time. This shows some interesting results relating to the productivity of each company relative to that of other companies over time, and also how the performance of each company relative to itself over 1993–2007 has evolved. The paper is grounded in the broad theory of methods for measuring productivity change, and more specifically in the use of circular Malmquist indices for that purpose. In this context, the contribution of the paper is methodological and applied. From the methodology perspective, the paper demonstrates the use of circular meta-Malmquist indices in a comparative context not only across companies but also within each company across time. This type of within-company assessment using Malmquist indices has not been applied extensively and, to the authors' knowledge, not to the UK water industry. From the application perspective, the paper throws light on the performance of UK water companies and assesses the potential impact of regulation on their performance. In this context, it updates the relevant literature using more recent data.
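The circularity property that distinguishes the meta-Malmquist index can be illustrated with a toy single-input/single-output example: because every efficiency score is measured against one pooled meta-frontier, productivity change chains exactly across periods. The company figures below are hypothetical, not from the paper.

```python
def meta_eff(y, x, meta_best):
    """Efficiency of a unit against a fixed meta-frontier
    (single output y, single input x)."""
    return (y / x) / meta_best

# Hypothetical (output, input) observations for one company per period
periods = {1993: (100, 50), 2000: (130, 52), 2007: (150, 60)}
meta_best = max(y / x for y, x in periods.values())  # pooled frontier

def mm_index(s, t):
    """Meta-Malmquist productivity change from period s to period t."""
    return meta_eff(*periods[t], meta_best) / meta_eff(*periods[s], meta_best)

# Circularity: the direct 1993->2007 index equals the chained product
direct = mm_index(1993, 2007)
chained = mm_index(1993, 2000) * mm_index(2000, 2007)
```

Here `direct` is (150/60)/(100/50) = 1.25, and `chained` is identical by construction; a conventional Malmquist index, built from period-specific frontiers, does not chain this way.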
Abstract:
National meteorological offices are largely concerned with synoptic-scale forecasting, where weather predictions are produced for a whole country for 24 hours ahead. In practice, many local organisations (such as emergency services, construction industries, forestry, farming, and sports) require only local, short-term, bespoke weather predictions and warnings. This thesis shows that these less-demanding requirements can be met, without exceptional computing power, by a modern desk-top system which monitors site-specific ground conditions (such as temperature, pressure, wind speed and direction, etc) augmented with above-ground information from satellite images to produce `nowcasts'. The emphasis in this thesis has been towards the design of such a real-time system for nowcasting. Local site-specific conditions are monitored using a custom-built, stand-alone, Motorola 6809-based sub-system. Above-ground information is received from the METEOSAT 4 geo-stationary satellite using a sub-system based on commercially available equipment. The information is ephemeral and must be captured in real time. The real-time nowcasting system for localised weather handles the data as a transparent task using the limited capabilities of the PC system. The ground data produce a time series of measurements at a specific location, representing the past-to-present atmospheric conditions of the particular site, from which much information can be extracted. The novel approach adopted in this thesis is one of constructing stochastic models based on the AutoRegressive Integrated Moving Average (ARIMA) technique. The satellite images contain features (such as cloud formations) which evolve dynamically and may be subject to movement, growth, distortion, bifurcation, superposition, or elimination between images.
The process of extracting a weather feature, following its motion and predicting its future evolution involves algorithms for normalisation, partitioning, filtering, image enhancement, and correlation of multi-dimensional signals in different domains. To limit the processing requirements, the analysis in this thesis concentrates on an `area of interest'. By this rationale, only a small fraction of the total image needs to be processed, leading to a major saving in time. The thesis also proposes an extension to an existing manual cloud classification technique for automatically classifying a cloud feature over the `area of interest' for nowcasting using the multi-dimensional signals.
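The ARIMA models fitted to the ground-station time series are not reproduced in the abstract. As a stand-in, the simplest special case, an AR(1) one-step-ahead nowcast fitted by least squares, can be sketched as follows; the temperature readings are hypothetical.

```python
def ar1_nowcast(series):
    """One-step-ahead forecast from an AR(1) model
    x[t] = c + phi * x[t-1] + e[t], fitted by least squares.
    A minimal stand-in for the thesis's full ARIMA models."""
    x_prev, x_next = series[:-1], series[1:]
    n = len(x_prev)
    mean_prev = sum(x_prev) / n
    mean_next = sum(x_next) / n
    cov = sum((a - mean_prev) * (b - mean_next) for a, b in zip(x_prev, x_next))
    var = sum((a - mean_prev) ** 2 for a in x_prev)
    phi = cov / var                      # lag-1 regression coefficient
    c = mean_next - phi * mean_prev      # intercept
    return c + phi * series[-1]          # nowcast for the next reading

# Hypothetical hourly site temperatures (deg C), gently rising
temps = [11.2, 11.5, 12.1, 12.8, 13.0, 13.4, 13.9]
nowcast = ar1_nowcast(temps)
```

With an upward-trending series the fitted `phi` is close to 1, so the nowcast continues the trend; a full ARIMA model would add differencing and moving-average terms for non-stationary data.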
Abstract:
Background - It is well established that the left inferior frontal gyrus plays a key role in the cerebral cortical network that supports reading and visual word recognition. Less clear is when in time this contribution begins. We used magnetoencephalography (MEG), which has both good spatial and excellent temporal resolution, to address this question. Methodology/Principal Findings - MEG data were recorded during a passive viewing paradigm, chosen to emphasize the stimulus-driven component of the cortical response, in which right-handed participants were presented words, consonant strings, and unfamiliar faces to central vision. Time-frequency analyses showed a left-lateralized inferior frontal gyrus (pars opercularis) response to words between 100–250 ms in the beta frequency band that was significantly stronger than the response to consonant strings or faces. The left inferior frontal gyrus response to words peaked at ~130 ms. This response was significantly later in time than the left middle occipital gyrus, which peaked at ~115 ms, but not significantly different from the peak response in the left mid fusiform gyrus, which peaked at ~140 ms, at a location coincident with the fMRI–defined visual word form area (VWFA). Significant responses were also detected to words in other parts of the reading network, including the anterior middle temporal gyrus, the left posterior middle temporal gyrus, the angular and supramarginal gyri, and the left superior temporal gyrus. Conclusions/Significance - These findings suggest very early interactions between the vision and language domains during visual word recognition, with speech motor areas being activated at the same time as the orthographic word-form is being resolved within the fusiform gyrus. 
This challenges the conventional view of a temporally serial processing sequence for visual word recognition in which letter forms are initially decoded, interact with their phonological and semantic representations, and only then gain access to a speech code.
Abstract:
This research sets out to assess whether the PHC system in rural Nigeria is effective by testing the research hypothesis: `PHC can be effective if and only if the Health Care Delivery System matches the attitudes and expectations of the Community'. The field surveys to accomplish this task were carried out in Igbo, Yoruba, and Hausa rural communities. A variety of research techniques have been used, including questionnaires, interviews and personal observations of events in the rural community. This thesis comprises three main parts. Part I traces the socio-cultural aspects of PHC in rural Nigeria, describes PHC management activities in Nigeria and the practical problems inherent in the system. Part II describes the various theoretical and practical research techniques used for the study and concentrates on the field work programme, data analysis and the testing of the research hypothesis. Part III focuses on general strategies to improve the PHC system in Nigeria and make it more effective; the research contributions to knowledge and the summary of the main conclusions of the study are also highlighted in this part. Based on testing and exploring the research hypothesis as stated above, the conclusions suggest that PHC in rural Nigeria is ineffective, as revealed in people's low opinions of the system and dissatisfaction with PHC services. Many people expressed the view that they could not obtain health care services in time, at a cost they could afford and in a manner acceptable to them. Following the conclusions, some alternative ways to implement PHC programmes in rural Nigeria have been put forward to improve the Nigerian PHC system and make it more effective.
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations, there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature.
It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to sensitivity decrement. Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
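The separation of perceptual sensitivity from response criterion described above is the standard signal detection theory decomposition. A sketch of the usual computation of d' and criterion c from hit and false-alarm rates is below; the rates are hypothetical, chosen to mimic the pattern the thesis reports (a stricter criterion over the work period, with no sensitivity decrement).

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal detection theory: sensitivity d' and criterion c
    computed from hit and false-alarm rates via the inverse
    standard normal CDF (z-transform)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical early vs. late watch-period data: both detections and
# false alarms fall, signalling a criterion shift, not a sensitivity loss
d_early, c_early = dprime_and_criterion(0.84, 0.16)
d_late, c_late = dprime_and_criterion(0.69, 0.07)
```

Here d' stays essentially constant (about 1.99 vs. 1.97) while c rises from 0 to about 0.49, i.e. the monitor becomes stricter without becoming less sensitive.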
Abstract:
The aim of this study is to determine whether nonlinearities have affected purchasing power parity (PPP) since 1885. Using recent advances in the econometrics of structural change, we also segment the sample according to the identified breaks and examine whether the PPP condition holds in each sub-sample and whether this involves linear or non-linear adjustment. Our results suggest that PPP holds during some sub-periods, although whether it holds, and whether the adjustment is linear or non-linear, depends primarily on the type of exchange rate regime in operation at any point in time.
Abstract:
We present a stochastic agent-based model for the distribution of personal incomes in a developing economy. We start with the assumption that incomes are determined both by individual labour and by stochastic effects of trading and investment. The income from personal effort alone is distributed about a mean, while the income from trade, which may be positive or negative, is proportional to the trader's income. These assumptions lead to a Langevin model with multiplicative noise, from which we derive a Fokker-Planck (FP) equation for the income probability density function (IPDF) and its variation in time. We find that high earners have a power law income distribution while the low-income groups have a Lévy IPDF. Comparing our analysis with Indian survey data (obtained from the World Bank website: http://go.worldbank.org/SWGZB45DN0) taken over many years, we obtain a near-perfect data collapse onto our model's equilibrium IPDF. Using survey data to relate the IPDF to actual food consumption, we define a poverty index (Sen A. K., Econometrica, 44 (1976) 219; Kakwani N. C., Econometrica, 48 (1980) 437) which is consistent with traditional indices, but independent of an arbitrarily chosen "poverty line" and therefore less susceptible to manipulation. Copyright © EPLA, 2010.
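The kind of Langevin dynamics described (income relaxing toward a mean set by labour, plus trading noise proportional to current income) can be sketched with an Euler-Maruyama simulation. The drift and noise parameters below are hypothetical, not those of the paper; the point is only that multiplicative noise produces a right-skewed equilibrium IPDF with a fat upper tail.

```python
import numpy as np

def simulate_incomes(n_agents=5_000, n_steps=2_000, dt=0.01,
                     relax=1.0, mean_income=1.0, sigma=0.5, seed=1):
    """Euler-Maruyama simulation of a Langevin model with
    multiplicative noise: dI = relax*(mean_income - I) dt + sigma*I dW.
    All parameter values are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    incomes = np.full(n_agents, mean_income)
    for _ in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(dt), n_agents)   # Wiener increments
        incomes += relax * (mean_income - incomes) * dt + sigma * incomes * noise
        incomes = np.maximum(incomes, 1e-9)              # keep incomes positive
    return incomes

incomes = simulate_incomes()
```

Because the trading noise scales with income, the stationary distribution is skewed toward high earners: the mean exceeds the median, unlike the symmetric additive-noise case.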
Abstract:
Purpose – The purpose of this paper is to investigate what sort of people become social entrepreneurs, and in what way they differ from business entrepreneurs. More importantly, it investigates in what socio-economic context entrepreneurial individuals are more likely to become social than business entrepreneurs. These questions are important for policy because there has been a shift from direct to indirect delivery of many public services in the UK, requiring a professional approach to social enterprise. Design/methodology/approach – Evidence is presented from the Global Entrepreneurship Monitor (GEM) UK survey based upon a representative sample of around 21,000 adults aged between 16 and 64 years interviewed in 2009. The authors use logistic multivariate regression techniques to identify differences between business and social entrepreneurs in demographic characteristics, effort, aspiration, use of resources, industry choice, deprivation, and organisational structure. Findings – The results show that the odds of an early-stage entrepreneur being a social rather than a business entrepreneur are reduced if they are from an ethnic minority, if they work ten hours or more per week on the venture, and if they have a family business background; while they are increased if they have higher levels of education and if they are a settled in-migrant to their area. While social entrepreneurs are more likely than business entrepreneurs to be women, this is due to gender-based differences in time commitment to the venture. In addition, the more deprived the community they live in, the more likely women entrepreneurs are to be social than business entrepreneurs. However, this does not hold in the most deprived areas, where we argue civic society is weakest and therefore not conducive to supporting any form of entrepreneurial endeavour based on community engagement.
Originality/value – The paper's findings suggest that women may be motivated to become social entrepreneurs by a desire to improve the socio-economic environment of the community in which they live and see social enterprise creation as an appropriate vehicle with which to address local problems.
Abstract:
Failure to detect or account for structural changes in economic modelling can lead to misleading policy inferences, which can be perilous, especially for the more fragile economies of developing countries. Using three potential monetary policy instruments (Money Base, M0, and Reserve Money) for 13 member-states of the CFA Franc zone over the period 1989:11-2002:09, we investigate the magnitude of information extracted by employing data-driven techniques when analyzing breaks in time-series, rather than the simplifying practice of imposing policy implementation dates as break dates. The paper also tests Granger's (1980) aggregation theory and highlights some policy implications of the results.
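The data-driven principle the abstract describes, estimating break dates from the series rather than imposing policy implementation dates, can be sketched for the simplest case of a single shift in the mean: choose the break point that minimises the total sum of squared residuals across the two sub-samples. The series below is hypothetical.

```python
def estimate_break(series, trim=2):
    """Estimate a single break in the mean of a series by picking the
    split point that minimises the total sum of squared residuals,
    rather than imposing a known (e.g. policy-date) break."""
    def ssr(segment):
        mean = sum(segment) / len(segment)
        return sum((v - mean) ** 2 for v in segment)

    # Trim the ends so each regime has at least `trim` observations
    return min(range(trim, len(series) - trim),
               key=lambda k: ssr(series[:k]) + ssr(series[k:]))

# Hypothetical monetary aggregate with a level shift at index 6
series = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0,
          2.1, 2.0, 1.9, 2.1, 2.0, 2.2]
k = estimate_break(series)  # index 6, the start of the second regime
```

A structural-break procedure in the style of Bai-Perron generalises this search to multiple breaks and full regression models, but the objective, minimising SSR over candidate break dates, is the same.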
Abstract:
Purpose: This study aims to build on recent research by investigating and examining how likely it is that Chinese locals (i.e. host country nationals (HCNs)) would offer support to expatriates from India and the USA. Design/methodology/approach: Data were gathered from 222 participants in Chinese organizations, asking them to respond to questions about their willingness to offer support to expatriates. Findings: As predicted, perceived values similarity was significantly related to higher dogmatism, which had a significant positive relationship with ethnocentrism. Further, ethnocentrism had a significant negative relationship with willingness to offer support. Research limitations/implications: All data were collected from the participants at one point in time, so the study's results are subject to common method bias. Also, the study included only India and the USA as the countries of origin of the expatriates. Practical implications: Given that HCNs do not automatically offer support to all expatriates, organizations might consider sending expatriates who are culturally similar to HCNs, as they are more likely to receive support, which will help their adjustment and thus organizational effectiveness. Originality/value: This study adds to the small, but growing, number of empirical investigations of HCN willingness to support expatriates. © Emerald Group Publishing Limited.