961 results for Lot sizing and scheduling
Abstract:
This thesis is entitled "Entrepreneurship and Motivation in the Small Business Sector of Kerala: A Study of the Rubber Products Manufacturing Industry". The rubber-based industry in Kerala was established only in the first half of the 20th century. The number of licensed manufacturers in the State has increased substantially over the years, particularly in the post-independence period: from 54 rubber manufacturing units in 1965-66, the number of licensed rubber-based industrial units rose to 1,300 in 2001-02, when Kerala occupied the primary position in the number of rubber goods manufacturers in the country. As per the latest report of the Third All India Census of Small Scale Industries (2001-02), Kerala has the third largest number of registered small-scale units in the country, after Tamil Nadu and Uttar Pradesh. This study of entrepreneurship in the small-scale rubber goods manufacturing industry in Kerala compares a cross-section of successful and unsuccessful entrepreneurs with respect to socio-economic characteristics and motivational dynamics. Based on a sample survey of 120 entrepreneurs in the Kottayam and Ernakulam districts, successful and unsuccessful entrepreneurs were selected using multiple criteria. The study provides guidelines for the development of entrepreneurship in Kerala. The results of the socio-economic survey support the hypothesis that successful entrepreneurs differ from unsuccessful entrepreneurs with respect to education, social contacts, initial investment, sales turnover, profits, capital employed, personal income, and number of employees. Successful entrepreneurs were found to be self-starters, adopted far more technological changes than unsuccessful entrepreneurs, and were more innovative: 31.50 percent of successful entrepreneurs reported innovations in business, against 8.50 percent of unsuccessful entrepreneurs.
Abstract:
The great potential for the culture of non-penaeid prawns, especially Macrobrachium rosenbergii, in the brackish and low-saline areas of the Indian coastal zone has not yet been fully exploited, owing to the non-availability of healthy seed in adequate numbers at the appropriate time. In spite of the setting up of several prawn hatcheries around the country to satiate the ever-growing demand for seed of the giant freshwater prawn, the supply still remains far below the requirement, mainly due to the mortality of the larvae at different stages of the larval cycle. In a larval rearing system of Macrobrachium rosenbergii, members of the family Vibrionaceae were found to be the dominant flora, and this was especially pronounced during times of mortality. However, to develop any sort of prophylactic and therapeutic measures, the pathogenic strains have to be segregated from the lot. This would never be possible unless they were clustered based on the principles of numerical taxonomy. It is with these objectives and requirements that the present work was carried out, involving phenotypic characterization of the isolates belonging to the family Vibrionaceae, working out their numerical taxonomy, determination of the mole % G+C ratio, segregation of the pathogenic strains, and screening of antibiotics as therapeutics for times of emergency.
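Numerical taxonomy of this kind typically starts from a similarity matrix over binary phenotypic test results. As a minimal sketch (the strain names and test outcomes below are hypothetical, not the thesis data), isolates can be compared with the simple matching coefficient and grouped at a similarity threshold ("phenon line"):

```python
# Minimal numerical-taxonomy sketch over hypothetical binary
# phenotypic profiles (1 = positive test result, 0 = negative).
def simple_matching(a, b):
    # fraction of tests on which two strains agree
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

strains = {          # hypothetical isolates, six phenotypic tests each
    "V1": [1, 1, 0, 1, 0, 1],
    "V2": [1, 1, 0, 1, 1, 1],
    "V3": [0, 0, 1, 0, 1, 0],
}
names = sorted(strains)
sims = {(i, j): simple_matching(strains[i], strains[j])
        for i in names for j in names if i < j}

# pairs above an 80% similarity "phenon line" cluster together
clusters = [pair for pair, s in sims.items() if s >= 0.8]
print(sims)
print(clusters)  # V1 and V2 agree on 5 of 6 tests and cluster
```

Real studies use many more tests and hierarchical (e.g. UPGMA) clustering, but the phenon-line idea is the same.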
Abstract:
The greatest damage inflicted by the recent recession has not necessarily been financial but emotional and psychological. The recession brought a lot of direct and indirect societal challenges, changes and opportunities everywhere, including in India. In this paper an attempt is made to analyse such societal changes, challenges and opportunities.
Abstract:
Tourism is an industry which is heavily dependent on marketing. Word-of-mouth communication has played a major role in shaping a number of destinations, and this is particularly true in the modern context of social networking, a phenomenon that is fast spreading over the internet. Many sites give visitors a lot of freedom to express their views, and the promotion of a destination depends heavily on the conversation and exchange of information over these social networks. This paper analyses social networking sites and their contribution to marketing tourism and hospitality. The negative impacts of the phenomenon are also discussed.
Abstract:
The Unit Commitment Problem (UCP) in a power system refers to the problem of determining the on/off status of generating units so as to minimize the operating cost over a given time horizon. Since various system and generation constraints must be satisfied while finding the optimum schedule, the UCP turns out to be a constrained optimization problem in power system scheduling. The numerical solutions developed so far are limited to small systems, and heuristic methodologies find difficulty in handling the stochastic cost functions associated with practical systems. This paper models unit commitment as a multi-stage decision-making task, and an efficient Reinforcement Learning solution is formulated considering minimum up-time/down-time constraints. The correctness and efficiency of the developed solutions are verified on standard test systems.
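To illustrate the multi-stage decision formulation (this is a sketch, not the paper's algorithm or test systems), tabular Q-learning on a hypothetical two-unit, three-hour system can be written as follows; minimum up/down-time constraints are omitted for brevity, and the demands, capacities and costs are invented:

```python
import random

# Hypothetical 2-unit, 3-hour system (illustrative data only).
demand  = [100, 150, 100]           # MW to serve in each hour
caps    = [120, 80]                 # unit capacities, MW
price   = [20, 30]                  # running cost, $/MWh
noload  = [100, 80]                 # $ per hour a unit is committed

actions = [(1, 0), (0, 1), (1, 1)]  # candidate on/off combinations

def stage_cost(t, a):
    """Cost of serving hour t under commitment a, with merit-order dispatch."""
    if sum(c for c, on in zip(caps, a) if on) < demand[t]:
        return 1e6                  # demand cannot be met: heavy penalty
    cost = float(sum(n for n, on in zip(noload, a) if on))
    left = demand[t]
    for c, p, on in sorted(zip(caps, price, a), key=lambda u: u[1]):
        if on and left > 0:
            gen = min(c, left)      # cheapest committed units run first
            cost += gen * p
            left -= gen
    return cost

# Tabular Q-learning over stages (state = hour, action = commitment).
Q = {(t, a): 0.0 for t in range(len(demand)) for a in actions}
alpha, eps = 0.2, 0.2
random.seed(1)
for _ in range(5000):
    for t in range(len(demand)):
        a = (random.choice(actions) if random.random() < eps
             else min(actions, key=lambda x: Q[(t, x)]))
        nxt = 0.0 if t + 1 == len(demand) else min(Q[(t + 1, b)] for b in actions)
        Q[(t, a)] += alpha * (stage_cost(t, a) + nxt - Q[(t, a)])

schedule = [min(actions, key=lambda a: Q[(t, a)]) for t in range(len(demand))]
print(schedule)  # cheap unit alone off-peak, both units at the peak hour
```

The learned schedule commits only the cheap unit when its capacity suffices and both units at the peak hour, which matches the exact dynamic-programming solution for these toy data.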
Abstract:
The paper 'Impact of Quality on Ethics and Social Responsibility in Marketing in Industries in Kerala in the present Indian scenario' highlights observations based on a descriptive research study carried out in five leading industries in Kerala, in the private and public sectors. The ethics and social responsibility practiced in these industries are reflected in the results of the survey, which was conducted on specific queries such as awareness of the products/services provided, total understanding of customer requirements, open discussion on technical matters, accountability of employees to society and social needs, and consumer ethics vis-a-vis business ethics. Team working goes a long way in building relations, which in turn results in a progressive and effective marketing strategy. This assumes paramount importance, considering the severe competition we are facing in the light of liberalization, privatization and globalization. The prediction of India becoming a lead nation, along with the USA, China and Japan, in this decade can be fulfilled only if we follow very high standards of ethics and social responsibility in all domains, including marketing. Organizations like TRW Rane, Sundaram Fasteners and TVS Motors in Chennai are a few among others in India who have achieved the highest distinction in quality, viz. the Deming Prize, and they demonstrate their commitment to quality, society and humanity at large. Cost effectiveness without jeopardizing quality has become the need of the hour, and the MRTP Act has become history. This trait is brought out through the survey, and the results speak for themselves. Unethical practices like bait-and-switch not only bring shame to the organization and the country, but also result in the company being wiped out from the market. Adherence to standards like ISO 14000 helps to maintain a minimum level of social responsibility and environmental friendliness.
Like quality audits and safety audits, social audits are being insisted upon in all progressive countries to ensure that organizations comply with the minimum statutory requirements. The paper also touches upon the Corporate Social Responsibility practiced in these industries, which becomes crystal clear through their commitment to improving the community. Green Marketing lays great importance on the three Rs of environmentalism, viz. Reduce, Reuse and Recycle. The objective of any business is to achieve optimal profit, and this is possible only by reducing cost as well as waste. In this context, management tools like brainstorming, suggestion schemes and benchmarking become helpful. These characteristics are brought out through the analysis of the survey results. The conclusions drawn throw light on the desirable practices with respect to Ethics and Social Responsibility in Marketing.
Abstract:
This thesis is divided into nine chapters and deals with the modification of TiO2 for various applications, including photocatalysis, thermal reactions, photovoltaics and non-linear optics. Chapter 1 gives a brief introduction to the topic of study. An introduction to the applications of modified titania systems in various fields is presented concisely, and the scope and objectives of the present work are also discussed in this chapter. Chapter 2 explains the strategy adopted for the synthesis of the metal, non-metal co-doped TiO2 systems. A hydrothermal technique was employed for the preparation of the co-doped TiO2 systems, with Ti[OCH(CH3)2]4, urea and metal nitrates used as the sources for TiO2, N and the metals respectively. In all the co-doped systems, urea and Ti[OCH(CH3)2]4 were taken in a 1:1 molar ratio and the concentration of the metals was varied. Five different co-doped catalytic systems were prepared, and for each catalyst three versions were obtained by varying the metal concentration. A brief explanation of the physico-chemical techniques used for the characterization of the materials is also presented in this chapter. These include X-ray Diffraction (XRD), Raman Spectroscopy, FTIR analysis, Thermogravimetric Analysis, Energy Dispersive X-ray Analysis (EDX), Scanning Electron Microscopy (SEM), UV-Visible Diffuse Reflectance Spectroscopy (UV-Vis DRS), Transmission Electron Microscopy (TEM), BET Surface Area Measurements and X-ray Photoelectron Spectroscopy (XPS). Chapter 3 contains the results and discussion of the characterization techniques used for analyzing the prepared systems. Characterization is an inevitable part of materials research: determining the physico-chemical properties of the prepared materials using suitable characterization techniques is crucial to finding their exact fields of application.
It is clear from the XRD patterns that the photocatalytically active anatase phase dominates in the calcined samples, with peaks at 2θ values around 25.4°, 38°, 48.1°, 55.2° and 62.7° corresponding to the (101), (004), (200), (211) and (204) crystal planes (JCPDS 21-1272) respectively. In the case of the Pr-N-Ti sample, however, a new peak was observed at 2θ = 30.8°, corresponding to the (121) plane of the polymorph brookite. There are no visible peaks corresponding to the dopants, which may be due to their low concentration or may indicate good dispersion of the impurities in the TiO2. The crystallite size of each sample was calculated from the Scherrer equation by using the full width at half maximum (FWHM) of the (101) peak of the anatase phase. The crystallite size of all the co-doped TiO2 samples was found to be lower than that of bare TiO2, which indicates that doping metal ions of larger ionic radius into the TiO2 lattice causes some lattice distortion that suppresses the growth of the TiO2 nanoparticles. The structural identity obtained from the XRD patterns is further confirmed by Raman spectral measurements; anatase has six Raman active modes. The band gap of each co-doped system was calculated using the Kubelka-Munk equation and was found to be lower than that of pure TiO2. The stability of the prepared systems was understood from thermogravimetric analysis. FT-IR was performed to identify the functional groups as well as to study the surface changes that occurred during modification. EDX was used to determine the impurities present in the systems: the EDX spectra of all the co-doped samples show signals directly related to the dopants, with O and Ti as the main components and low concentrations of the doped elements. The morphologies of the prepared systems were obtained from SEM and TEM analysis, and the average particle size was drawn from histogram data. The electronic structures of the samples were identified from XPS measurements.
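The Scherrer estimate used above, D = Kλ / (β cos θ), is easy to reproduce numerically. The values below (Cu Kα wavelength, shape factor 0.9, and an assumed 0.5° FWHM for the anatase (101) peak at 2θ ≈ 25.4°) are illustrative, not the thesis measurements:

```python
import math

# Scherrer crystallite-size estimate: D = K * lam / (beta * cos(theta))
K   = 0.9           # shape factor (dimensionless)
lam = 1.5406        # Cu K-alpha wavelength, angstroms
two_theta = 25.4    # anatase (101) peak position, degrees
fwhm_deg  = 0.5     # assumed FWHM of that peak, degrees (illustrative)

theta = math.radians(two_theta / 2)   # Bragg angle in radians
beta  = math.radians(fwhm_deg)        # peak width in radians
D_nm  = K * lam / (beta * math.cos(theta)) / 10  # angstroms -> nm
print(f"crystallite size ~ {D_nm:.1f} nm")  # ~16.3 nm for these inputs
```

Note that instrumental broadening should be subtracted from the measured FWHM before applying the equation in practice.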
Chapter 4 describes the photocatalytic degradation of the herbicides Atrazine and Metolachlor using the metal, non-metal co-doped titania systems. The percentage of degradation was analyzed by HPLC. Parameters such as the effect of different catalysts, the effect of time, the effect of catalyst amount, and reusability were discussed. Chapter 5 deals with the photo-oxidation of some anthracene derivatives by the co-doped catalytic systems. These anthracene derivatives come under the category of polycyclic aromatic hydrocarbons (PAH). Due to the presence of stable benzene rings, most PAH show strong resistance to biological degradation and to the common methods employed for their removal. According to the Environmental Protection Agency, most PAH are highly toxic in nature. TiO2 photochemistry has been extensively investigated as a method for the catalytic conversion of such organic compounds, highlighting its potential in green chemistry. There are essentially two ways to remove pollutants from the ecosystem: complete mineralization, or conversion of toxic compounds into compounds less toxic than the starting material. In this chapter we concentrate on the second approach. The catalysts used were Gd(1wt%)-N-Ti, Pd(1wt%)-N-Ti and Ag(1wt%)-N-Ti. All the PAH were successfully converted to anthraquinone, a compound having diverse applications in industrial as well as medical fields. Substitution at the 10th position of the PAH by a phenyl ring reduces the feasibility of the photoreaction and produced 9-hydroxy-9-phenylanthrone (9H9PA) as an intermediate species. The products were separated and purified by column chromatography using a 70:30 hexane/DCM mixture as the mobile phase, and the resultant products were characterized thoroughly by 1H NMR, IR spectroscopy and GC-MS analysis.
Chapter 6 elucidates the heterogeneous Suzuki coupling reaction over Cu/Pd bimetallic catalysts supported on TiO2. A sol-gel route followed by impregnation was adopted for the synthesis of Cu/Pd-TiO2. The prepared system was characterized by XRD, TG-DTG, SEM, EDX, BET surface area and XPS. The product was separated and purified by column chromatography using hexane as the mobile phase. A maximum isolated yield of biphenyl of around 72% was obtained in DMF using Cu(2wt%)-Pd(4wt%)-Ti as the catalyst; the most effective solvent, base and catalyst were found to be DMF, K2CO3 and Cu(2wt%)-Pd(4wt%)-Ti respectively. Chapter 7 gives an idea of the photovoltaic (PV) applications of TiO2-based thin films. Due to the energy crisis, the whole world is looking for new sustainable energy sources, and harnessing solar energy is one of the most promising ways to tackle this issue. The presently dominant photovoltaic technologies are based on inorganic materials, but high material and manufacturing costs and low power conversion efficiency limit their popularization. A lot of research has been conducted towards the development of low-cost PV technologies, of which organic photovoltaic (OPV) devices are among the most promising. Here, two TiO2 thin films of different thickness were prepared by spin coating. The prepared films were characterized by XRD, AFM and conductivity measurements, and their thickness was measured with a stylus profiler. This chapter mainly concentrates on the fabrication of an inverted heterojunction solar cell using the conducting polymer MEH-PPV as the photoactive layer, with TiO2 as the electron transport layer. Thin films of MEH-PPV were also prepared by spin coating. Two fullerene derivatives, PCBM and ICBA, were introduced into the device in order to improve the power conversion efficiency. Effective charge transfer between the conducting polymer and ICBA was confirmed by fluorescence quenching studies.
The fabricated inverted heterojunction exhibited a maximum power conversion efficiency of 0.22% with ICBA as the acceptor molecule. Chapter 8 narrates the third-order nonlinear optical properties of bare and noble-metal-modified TiO2 thin films. The thin films were fabricated by spray pyrolysis. Sol-gel derived Ti[OCH(CH3)2]4 in CH3CH2OH/CH3COOH was used as the precursor for TiO2, and the precursors used for Au, Ag and Pd were aqueous solutions of HAuCl4, AgNO3 and Pd(NO3)2 respectively. The prepared films were characterized by XRD, SEM and EDX. The nonlinear optical properties of the prepared materials were investigated by the Z-scan technique using an Nd-YAG laser (532 nm, 7 ns, 10 Hz), and the nonlinear coefficients were obtained by fitting the experimental Z-scan plots with the theoretical plots. Nonlinear absorption is a phenomenon defined as a nonlinear change (increase or decrease) in absorption with increasing intensity, and it is mainly divided into two types: saturable absorption (SA) and reverse saturable absorption (RSA). Depending on the pump intensity and on the absorption cross-section at the excitation wavelength, most molecules show nonlinear absorption. With increasing intensity, if the excited states show saturation owing to their long lifetimes, the transmission shows SA characteristics: absorption decreases as intensity increases. If, however, the excited state absorbs strongly compared with the ground state, the transmission shows RSA characteristics. In our work most of the materials showed SA behavior and some exhibited RSA behavior. Both properties depend purely on the nature of the materials and the alignment of the energy states within them, and both SA and RSA have immense applications in electronic devices. The important results obtained from the various studies are presented in Chapter 9.
Abstract:
The study of variable stars is an important topic of modern astrophysics. After the invention of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data needs many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like an eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric variables. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena; most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is to have an expert visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, along with some other derived parameters. Of these, the period is the most important, since a wrong period can lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system, and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps; this is due to the daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data, and most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes: power leakage to other frequencies, due to the finite total interval, finite sampling interval and finite amount of data; aliasing, due to the influence of regular sampling; and spurious periods due to long gaps, while power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For classifying newly discovered variable stars and entering them in the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
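The folding and period-confirmation ideas above can be sketched with a toy string-length statistic (in the spirit of non-parametric methods such as PDM, not the thesis's modified cubic spline): fold the data on each trial period and measure how jagged the phased curve is. The sampling times, amplitude and trial periods below are synthetic, chosen only for illustration:

```python
import math

def fold(times, period):
    # phase of each observation when folded on a trial period
    return [(t % period) / period for t in times]

def string_length(times, mags, period):
    # total length of segments joining the phase-ordered points;
    # minimised when folding at the true period yields a smooth curve
    pts = sorted(zip(fold(times, period), mags))
    return sum(math.hypot(p2 - p1, m2 - m1)
               for (p1, m1), (p2, m2) in zip(pts, pts[1:]))

true_p = 2.5  # days (synthetic sinusoidal variable)
times = [0.0, 0.7, 1.9, 3.1, 4.4, 6.2, 7.5, 8.8, 10.3, 12.1, 13.6, 15.0]
mags  = [10 + math.sin(2 * math.pi * t / true_p) for t in times]

trials = [1.0, 1.7, 2.5, 3.3, 5.0]
best = min(trials, key=lambda p: string_length(times, mags, p))
print("best trial period:", best)  # the true 2.5 d minimises the string length
```

Wrong trial periods scramble the phases, so the phased points zig-zag and the string length grows; a fine grid of trial periods (and care with harmonics and aliases) is needed in practice.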
Abstract:
During recent years, quantum information processing and the study of N-qubit quantum systems have attracted a lot of interest, both in theory and in experiment. Apart from the promise of performing efficient quantum information protocols, such as quantum key distribution, teleportation or quantum computation, these investigations have also revealed a great deal of difficulties which still need to be resolved in practice. Quantum information protocols rely on the application of unitary and non-unitary quantum operations that act on a given set of quantum mechanical two-state systems (qubits) to form (entangled) states in which the information is encoded. The overall system of qubits is often referred to as a quantum register. Today the entanglement in a quantum register is known to be the key resource for many protocols of quantum computation and quantum information theory. However, despite the successful demonstration of several protocols, such as teleportation or quantum key distribution, there are still many open questions about how entanglement affects the efficiency of quantum algorithms or how it can be protected against noisy environments. To facilitate the simulation of such N-qubit quantum systems and the analysis of their entanglement properties, we have developed the Feynman program. The program package provides all the necessary tools to define and deal with quantum registers, quantum gates and quantum operations. Using an interactive and easily extensible design within the framework of the computer algebra system Maple, the Feynman program is a powerful toolbox not only for teaching the basic and more advanced concepts of quantum information but also for studying their physical realization in the future. To this end, the Feynman program implements a selection of algebraic separability criteria for bipartite and multipartite mixed states as well as the most frequently used entanglement measures from the literature.
Additionally, the program supports working with quantum operations and their associated (Jamiolkowski) dual states. Based on the implementation of several popular decoherence models, we provide tools especially for the quantitative analysis of quantum operations. As an application of the developed tools, we further present two case studies in which the entanglement of two atomic processes is investigated. In particular, we have studied the change of the electron-ion spin entanglement in atomic photoionization and the photon-photon polarization entanglement in the two-photon decay of hydrogen. The results show that both processes are, in principle, suitable for the creation and control of entanglement. Apart from process-specific parameters like the initial atom polarization, it is mainly the process geometry which offers a simple and effective instrument to adjust the final-state entanglement. Finally, for the case of the two-photon decay of hydrogenlike systems, we study the difference between nonlocal quantum correlations, as given by the violation of the Bell inequality, and the concurrence as a true entanglement measure.
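For a pure two-qubit state with amplitudes (a00, a01, a10, a11), the concurrence mentioned above reduces to the closed form C = 2|a00·a11 − a01·a10|. A minimal stand-alone check (independent of the Feynman/Maple implementation) is:

```python
import math

def concurrence(amps):
    """Concurrence of a pure two-qubit state [a00, a01, a10, a11]:
    C = 2 |a00*a11 - a01*a10| after normalisation."""
    n = math.sqrt(sum(abs(x) ** 2 for x in amps))
    a00, a01, a10, a11 = (x / n for x in amps)
    return 2 * abs(a00 * a11 - a01 * a10)

print(concurrence([1, 0, 0, 1]))  # Bell state -> 1.0 (maximally entangled)
print(concurrence([1, 0, 0, 0]))  # product state -> 0.0 (separable)
```

For mixed states the Wootters formula over the eigenvalues of the spin-flipped density matrix is needed instead, which is where a toolbox such as the Feynman program earns its keep.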
Abstract:
In composite agricultural materials such as grass, tea and medicinal plants, the leaves and stems have different drying times. Because of this behavior, after leaving the dryer the stems may have a greater moisture content than desired while the leaves have a lower one, which can cause either the appearance of fungi or the collapse of the over-dried material. Taking into account that a lot of grass is dehydrated in forced-air dryers, especially rotary drum dryers, this research was developed in order to establish conditions enabling a separation of the components during the drying process, so as to provide a homogeneous product at the end. For this, a rotary dryer consisting of three concentric cylinders and a circular sieve aligned with the innermost cylinder was proposed, so that once material enters the dryer in the area of the inner cylinder, stems pass through the sieve to the middle and then continue towards the external cylinder, while the leaves continue along the inner cylinder. For this project, a mixture of ryegrass and white clover was used. The characteristics measured for the components of the mixture were: drying rate in thin layer and in rotation, bulk density, projected area, terminal velocity, weight/area ratio, and flux through the rotary sieve. Three drying temperatures (40 °C, 60 °C and 80 °C) and three rotation speeds (10 rpm, 20 rpm and 40 rpm) were evaluated. It was found that the differences in drying time are smallest at 80 °C when the dryer rotates at 40 rpm; above this speed, the material adheres to the walls of the dryer or sieve and does not flow. According to the measurements of the terminal velocity of the stems and leaves of the components of the mixture, the air speed should be less than 1.5 m/s in the inner drum for the leaves and less than 4.5 m/s in the middle and outer drums for the stems, so that only the rotational movement of the dryer moves the material, achieving a greater residence time.
On the other hand, the best rotary sieve separation efficiencies were achieved when the material was dry, although the results were good at all moisture contents. The best rotary speed of the sieve is close to the critical rotational speed, i.e. 20 rpm. However, the rotational speed of the dryer, including the sieve in line with the inner cylinder, should be 10 rpm or less in order to achieve the greatest residence times of the material inside the dryer and the best agitation through the use of lifting flights. From a finite element analysis of a dryer prototype, using an air flow providing the air speeds already stated, it was found that the best performance occurs when, through a cover, air enters the dryer in front of the middle cylinder and when the inner cylinder is formed in its entirety by a sieve. This way, air flows in almost equal amounts through both the middle and external cylinders, while part of the air in the middle cylinder passes through the sieve towards the inner cylinder. With this, leaves do not adhere to the sieve and flow along the dryer, thanks to the rotating movement of the drums and the showering caused by the lifting flights. In these conditions, the differences in drying time are reduced to 60 minutes, but the residence time is higher for the stems than for the leaves, so the components of the grass mixture leave the dryer with the same desired moisture content.
Abstract:
Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long-latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long-latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. It proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor: a priority is assigned to each thread, either statically or dynamically, and is used by the thread scheduler to decide which threads to load in the contexts and which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects, and show how thread prioritization can both maintain high processor utilization and limit increases in critical-path runtime caused by multithreading. The model also shows that, in order to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests.
We show how simple hardware can prioritize the running of threads in the multiple contexts, and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and devoting more cycles to critical threads. Thread prioritization can be used in combination with other techniques to improve cache performance and minimize cache interference between different working sets in the cache. For applications that are critical path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.
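As a toy illustration of the mechanism described above (not the thesis's actual hardware), a priority-directed context switch can be sketched as follows; the thread names, priority values, and class layout are invented for the example:

```python
# Sketch of priority-directed context switching on a multiple-context
# processor: on a long-latency event, the scheduler switches to the
# highest-priority thread that is ready among the loaded contexts.

class Thread:
    def __init__(self, tid, priority):
        self.tid = tid
        self.priority = priority   # higher value = more critical
        self.ready = True          # False while stalled on a long-latency op

def pick_next(contexts):
    """Return the highest-priority ready thread, or None if all are stalled."""
    ready = [t for t in contexts if t.ready]
    return max(ready, key=lambda t: t.priority) if ready else None

contexts = [Thread("A", 1), Thread("B", 3), Thread("C", 2)]
contexts[1].ready = False          # B is stalled on a remote memory reference
nxt = pick_next(contexts)
print(nxt.tid)                     # C: the most critical thread that can run now
```

The same comparison can be reused for prioritizing memory requests: order outstanding requests by the priority of the issuing thread rather than by arrival time.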
Optimal Methodology for Synchronized Scheduling of Parallel Station Assembly with Air Transportation
Resumo:
We present an optimal methodology for synchronized scheduling of production assembly with air transportation to achieve accurate delivery with minimized cost in a consumer electronics supply chain (CESC). The problem was motivated by a major PC manufacturer in the consumer electronics industry, which must schedule deliveries to meet customer needs in different parts of South East Asia. The overall problem is decomposed into two sub-problems: an air transportation allocation problem and an assembly scheduling problem. The air transportation allocation problem is formulated as a linear programming problem with earliness and tardiness penalties for job orders. The assembly scheduling problem requires sequencing the job orders on the assembly stations so as to minimize their waiting times before they are shipped by flight to their destinations. Hence the second sub-problem is modelled as a scheduling problem with earliness penalties. The earliness penalties are assumed to be independent of the job orders.
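A minimal sketch of the earliness/tardiness trade-off behind the allocation sub-problem, assuming one job order choosing among discrete flight departures (the weights and times are invented; the abstract formulates the full problem as a linear program over all orders and flights):

```python
def assign_flight(due, flights, alpha=1.0, beta=3.0):
    """Pick the departure time minimizing the earliness/tardiness
    penalty for a single job order (alpha, beta are illustrative weights,
    with tardiness penalized more heavily than earliness)."""
    def penalty(dep):
        earliness = max(due - dep, 0.0)
        tardiness = max(dep - due, 0.0)
        return alpha * earliness + beta * tardiness
    return min(flights, key=penalty)

# Order due at t=10, flights depart at t=6, 9, 12:
print(assign_flight(10, [6, 9, 12]))  # 9: one unit early beats two units late
```

With `beta > alpha`, a slightly early flight (penalty 1.0 here) is preferred over a slightly late one (penalty 6.0), which is the asymmetry the LP formulation encodes.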
Resumo:
In this article we compare regression models obtained to predict PhD students' academic performance at the universities of Girona (Spain) and in Slovenia. Explanatory variables are characteristics of the PhD student's research group, understood as an egocentered social network, background and attitudinal characteristics of the PhD students, and some characteristics of the supervisors. Academic performance was measured by the weighted number of publications. Two web questionnaires were designed, one for PhD students and one for their supervisors and other research group members. Most of the variables were easily comparable across universities thanks to a careful translation procedure and pre-tests. When direct comparison was not possible we created comparable indicators. We used a regression model in which the country was introduced as a dummy-coded variable including all possible interaction effects. The optimal transformations of the main and interaction variables are discussed. Some differences between the Slovenian and Girona universities emerge. Some variables, such as the supervisor's performance and motivation for autonomy prior to starting the PhD, have the same positive effect on the PhD student's performance in both countries. On the other hand, variables such as too-close supervision by the supervisor and having children have a negative influence in both countries. However, we find differences between countries for motivation for research prior to starting the PhD, which increases performance in Slovenia but not in Girona. As regards network variables, frequency of supervisor advice increases performance in Slovenia and decreases it in Girona. The negative effect in Girona could be explained by the fact that additional contacts of the PhD student with his/her supervisor might indicate a higher workload in addition to, or instead of, better advice about the dissertation.
The number of the student's external advice relationships and the mean contact intensity of social support are not significant in Girona, but they have a negative effect in Slovenia. We might explain the negative effect of external advice relationships in Slovenia by noting that a lot of external advice may actually result from a lack of the more relevant internal advice.
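The dummy-coding-with-interaction setup described above can be illustrated with a small, noise-free synthetic example (all coefficients are invented; `slope` is a plain one-variable least-squares fit, and the interaction coefficient is the difference between the two countries' slopes):

```python
# Synthetic, noise-free data: performance vs. advice frequency,
# with opposite slopes in the two (hypothetical) country groups.
girona_x = [0, 1, 2, 3]
girona_y = [1.0 - 0.4 * x for x in girona_x]   # advice decreases performance
slov_x = [0, 1, 2, 3]
slov_y = [1.5 + 0.5 * x for x in slov_x]       # advice increases performance

def slope(xs, ys):
    """Least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)

b_advice = slope(girona_x, girona_y)             # base (country=0) slope
b_interaction = slope(slov_x, slov_y) - b_advice # shift for country=1
print(round(b_advice, 6), round(b_interaction, 6))  # -0.4 0.9
```

In a dummy-coded model `y = b0 + b1*country + b2*advice + b3*country*advice`, the country-1 slope is `b2 + b3`, which is exactly what the difference of group slopes recovers here.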
Resumo:
Strategic positioning is defined as the starting point of any reflection by the organization (however small it may be) that intends to put in place within the company the "give back" of its own performance. In the case of a collective strategy, there is not just the vision of a single manager; on the contrary, there are many visions from different managers, who will have to take common decisions that benefit both their own interests and the shared interests of each company. It is therefore essential that the current situation and the objectives to be achieved are clearly defined from the beginning of the strategy's elaboration, in order to avoid possible divergences that could jeopardize the coherence of the strategy. Given the problems faced by French SMEs, such as starting up the activity, financial difficulties, organizational integration, competition and product development, the collective strategy appears as a possible solution that allows SMEs to endure over time. In France, driven by the government and other financial and administrative institutions, this strategy has achieved results that had not previously been thought possible, as shown by the urban-model case study presented in this research. This is why this topic was chosen.
Resumo:
This paper presents a control strategy for blood glucose (BG) level regulation in type 1 diabetic patients. To design the controller, a model-based predictive control scheme has been applied to a newly developed diabetic patient model. The controller is provided with a feedforward loop to improve meal compensation, a gain-scheduling scheme to account for different BG levels, and an asymmetric cost function to reduce hypoglycemic risk. A simulation environment that has been approved for testing of artificial pancreas control algorithms was used to test the controller. The simulation results show good controller performance in fasting conditions and meal disturbance rejection, as well as robustness against model–patient mismatch and errors in meal estimation.
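The asymmetric-cost idea can be sketched independently of the paper's actual patient model. Everything below is an invented toy: the linear BG prediction, the sensitivity constants, and the weights are illustrative values, not the paper's model or tuning:

```python
def asymmetric_cost(bg, target=110.0, w_hypo=10.0, w_hyper=1.0):
    """Quadratic cost that penalizes deviations below target
    (hypoglycemia risk) more heavily than deviations above it."""
    dev = bg - target
    return (w_hypo if dev < 0 else w_hyper) * dev * dev

def predict_bg(bg, dose, carbs, horizon=3, s_i=2.0, c_g=0.5):
    """Toy linear prediction: insulin lowers BG, carbs raise it."""
    for _ in range(horizon):
        bg = bg - s_i * dose + c_g * carbs
        carbs = 0.0                    # meal absorbed in the first step
    return bg

def best_dose(bg, carbs, doses=range(13)):
    """One receding-horizon step: pick the dose with the lowest
    predicted end-of-horizon cost."""
    return min(doses, key=lambda d: asymmetric_cost(predict_bg(bg, d, carbs)))

print(best_dose(150.0, 60.0))  # 11: ends 4 mg/dL above target
```

With a symmetric cost (`w_hypo = 1.0`), the same search would pick dose 12 and land 2 mg/dL below target; the asymmetry makes the controller accept a small overshoot above target rather than risk crossing below it.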