942 results for Lot sizing and scheduling


Relevance:

30.00%

Publisher:

Abstract:

Nanosized stannic oxide (SnO2) particles modified with a layer of DBS were successfully prepared by a colloidal chemical method, and their microstructures were characterized. FTIR and XPS were used to determine the main components, confirming that the nanosized SnO2 particles were capped by DBS. Particle sizes were determined by TEM and XRD. The XPS investigation shows that there are many oxygen vacancies on the surface of the nanoparticles, and this conclusion explains the ESR signal of the sample.

Relevance:

30.00%

Publisher:

Abstract:

The use of RNA interference for gene knockdown in zebrafish, through expression of the small interfering RNA mediators from DNA vectors, has generated considerable excitement in the research community. In this work, the ability of a human cytomegalovirus immediate early promoter (CMV promoter)-driven short hairpin RNA (shRNA) expression vector to induce shRNA against the vascular endothelial growth factor (VEGF) gene in zebrafish was tested, and its effects on VEGF-mediated vasculogenesis and angiogenesis were evaluated. Altogether four vectors targeting various locations of the VEGF gene were constructed, and pSI-V4 proved the most effective. Microinjection of pSI-V4 into zebrafish embryos resulted in defective vascular formation and downregulation of VEGF expression. In situ hybridization analysis indicated that silencing VEGF gene expression with pSI-V4 resulted in downregulation of neuropilin-1 (NRP1), a potent VEGF receptor. Knockdown of VEGF expression by morpholino gave the same result. This provided evidence that VEGF-mediated angiogenesis in zebrafish is in part dependent on NRP1 expression. The results contribute to a better understanding of the molecular mechanisms of cardiovascular development and provide a potential promoter for constructing inducible knockdowns in zebrafish.

Relevance:

30.00%

Publisher:

Abstract:

The Zenisu deep-sea channel originates from a volcanic arc region, the Izu-Ogasawara Island Arc, and vanishes in the Shikoku Basin of the Philippine Sea. According to the swath bathymetry, the deep-sea channel can be divided into three segments: the Zenisu canyon, the E-W fan channel, and the trough-axis channel. Abundant volcanic detritus was deposited in the Zenisu Trough via the deep-sea channel because it originates in a volcanic arc setting. On the basis of swath bathymetry, submersible and seismic reflection data, the deposits are characterized by turbidites and debrites, as in other major deep-sea channels. Erosion and sparse sediment were observed in the Zenisu canyon, whereas abundant turbidites and debrites occur in the E-W channel and trough-axis channel. Cold seep communities, an active fault and fluid flow were discovered along the lower slope of the Zenisu Ridge. Vertical sedimentary sequences in the Zenisu Trough consist of the four post-rift sequence units of the Shikoku Basin, among which Units A and B are two turbidite units. The development of the Zenisu canyon is controlled by the N-S shear fault, the E-W fan channel is related to the E-W shear fault, and the trough-axis channel is related to the subsidence of the central basin.

Relevance:

30.00%

Publisher:

Abstract:

In order to explore the inhibitory mechanism of coumarins toward aldose reductase (ALR2), AutoDock and Gromacs were used for docking and molecular dynamics studies of 14 coumarins (CM) and the ALR2 protease. The docking results indicate that residues TYR48, HIS110, and TRP111 form the active pocket of ALR2 and that, besides van der Waals and hydrophobic interactions, the CM interact with ALR2 mainly by forming hydrogen bonds, which causes the inhibitory behavior. Except for CM1, all the coumarins use the lactone moiety as acceptor to build up the hydrogen-bond network with the active-pocket residues. Unlike CM3, which has two comparable binding modes with ALR2, most coumarins have only one dominant orientation in the binding site. The molecular dynamics calculations, based on the docking results, imply that the orientations of the CM in the active pocket differ in stability. CM1 and CM3a adopt an unstable binding mode with ALR2: their conformations and RMSDs relative to ALR2 change considerably over the course of the simulation. The remaining CM stay hydrogen-bonded to residues TYR48 and HIS110 through the carbonyl O atom of the lactone group throughout the whole process, retaining their original binding mode and gradually reaching dynamic equilibrium.
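
The stability claim above rests on tracking ligand RMSD along the trajectory. As a minimal sketch of that bookkeeping (not the paper's Gromacs workflow; the coordinates are synthetic and no superposition/fitting step is performed):

```python
import numpy as np

def rmsd(frame, reference):
    """Root-mean-square deviation between two (N, 3) coordinate arrays."""
    diff = frame - reference
    return float(np.sqrt((diff * diff).sum() / len(reference)))

# Hypothetical data: a 20-atom ligand over 100 frames, drifting mildly.
rng = np.random.default_rng(0)
reference = rng.normal(size=(20, 3))
trajectory = reference + rng.normal(scale=0.2, size=(100, 20, 3))

# A stable binding mode gives a flat RMSD profile; an unstable one
# (CM1/CM3a above) drifts and fluctuates as the dynamics proceed.
profile = [rmsd(frame, reference) for frame in trajectory]
print(f"mean RMSD: {np.mean(profile):.3f}")
```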

Relevance:

30.00%

Publisher:

Abstract:

For a class of single-machine scheduling problems in which jobs have distinct due dates and both earliness and tardiness are penalized, an optimization method based on a genetic algorithm is proposed. A genetic algorithm using a "non"-uniform order crossover operator is presented for sequence optimization; based on an analysis of the properties of the penalty function, an algorithm for the optimal start times is given. The proposed algorithm is compared with other algorithms on scheduling problems of various sizes, and the results show that the method performs well.
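
As a rough sketch of the kind of genetic algorithm described: the fitness below charges per-job earliness/tardiness penalties for a back-to-back schedule starting at time zero (the paper additionally computes optimal start times, which this omits), and the crossover shown is the standard order crossover (OX), not necessarily the paper's "non"-uniform variant. All problem data are invented for illustration.

```python
import random

def et_penalty(seq, proc, due, alpha, beta):
    """Earliness/tardiness penalty for jobs run back-to-back from t = 0."""
    t, total = 0, 0.0
    for j in seq:
        t += proc[j]
        total += alpha[j] * max(due[j] - t, 0) + beta[j] * max(t - due[j], 0)
    return total

def order_crossover(p1, p2):
    """Standard OX: keep a random slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    kept = set(p1[i:j])
    fill = iter(g for g in p2 if g not in kept)
    return [p1[k] if i <= k < j else next(fill) for k in range(len(p1))]

# Four jobs: processing times, due dates, earliness/tardiness weights.
proc, due = [4, 2, 6, 3], [5, 4, 16, 9]
alpha, beta = [1, 1, 1, 1], [2, 2, 2, 2]
print(et_penalty([1, 0, 3, 2], proc, due, alpha, beta))
```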

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a new scheduling algorithm for the flexible manufacturing cell is presented: a discrete-time control method that combines a fixed-length control period with event interruption. At the flow-control level, we simultaneously determine the production mix and the proportion of parts to be processed through each route. Simulation results for a hypothetical manufacturing cell are presented.
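
A minimal sketch of the control structure being described, fixed-length periods plus event interruptions; the replanning step is a stub, since the abstract does not specify the flow-control computation, and all names here are invented:

```python
def run_controller(horizon, period, events, replan):
    """Replan at every fixed-length period boundary and at each event.

    `events` is a sorted list of (time, label) interruptions, e.g. machine
    breakdowns; `replan` is a callback that would recompute the production
    mix and the routing proportions.
    """
    pending = list(events)
    t = 0.0
    while t < horizon:
        tick = t + period
        while pending and pending[0][0] <= tick:   # event interruption
            et, label = pending.pop(0)
            replan(et, reason=label)
        replan(tick, reason="period")              # fixed control period
        t = tick

# Example: replan every 10 time units, with a breakdown at t = 23.5.
run_controller(40, 10, [(23.5, "machine breakdown")],
               lambda t, reason: print(f"t={t:5.1f}: replanning ({reason})"))
```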

Relevance:

30.00%

Publisher:

Abstract:

We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require much training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. Rather than specific object parts, the jointly selected features are closer to edges and generic features typical of many natural structures. These generic features generalize better and considerably reduce the computational cost of multi-class object detection.
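
The key mechanism, selecting one feature whose stump helps many classes at once, can be sketched as follows. This is a deliberately simplified toy: real joint boosting also searches over subsets of classes sharing each feature and fits regression stumps with per-class confidences.

```python
import numpy as np

def shared_stump(X, Y, W):
    """Pick the (feature, threshold) whose decision stump minimizes the
    weighted error pooled over ALL classes, i.e. a feature shared across
    classes rather than one feature per independently trained detector.

    X: (n, d) features; Y, W: (n, c) labels in {-1, +1} and weights.
    """
    best_err, best = np.inf, None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            pred = np.where(X[:, f] > thr, 1.0, -1.0)[:, None]  # (n, 1)
            err = float((W * (pred != Y)).sum())  # error pooled over classes
            if err < best_err:
                best_err, best = err, (f, thr)
    return best, best_err
```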

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents SodaBot, a general-purpose software agent user-environment and construction system. Its primary component is the basic software agent: a computational framework for building agents which is essentially an agent operating system. We also present a new language for programming the basic software agent, whose primitives are designed around human-level descriptions of agent activity. Via this programming language, users can easily implement a wide range of typical software agent applications, e.g., personal on-line assistants and meeting-scheduling agents. The SodaBot system has been implemented and tested, and its description comprises the bulk of this thesis.

Relevance:

30.00%

Publisher:

Abstract:

Two methods of obtaining approximate solutions to the classic General Job-shop Scheduling Problem are investigated. The first method is iterative: a sampling of the solution space is used to decide which of a collection of space-pruning constraints are consistent with "good" schedules. The selected space-pruning constraints are then used to reduce the search space, and the sampling is repeated. This approach can be used either to verify whether some set of space-pruning constraints can prune with discrimination or to generate solutions directly. Schedules can be represented as trajectories through a Cartesian space. Under the objective criterion of Minimum Maximum Lateness, a family of "good" schedules (trajectories) are geometric neighbors (residing within some "tube") in this space. The second method of generating solutions takes advantage of this adjacency by pruning the space from the outside in, thus converging gradually upon this "tube." On average, this method significantly outperforms an array of Priority Dispatch rules when the objective criterion is Minimum Maximum Lateness. It also compares favorably with a recent relaxation procedure.
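
As a toy rendering of the first (iterative) method, here is a sketch using a tiny single-machine stand-in for the schedule space and simple precedence predicates as the space-pruning constraints; all names and problem data are invented for illustration:

```python
import random

# Toy stand-in for the schedule space: permutations of four jobs.
PROC = [4, 2, 6, 3]
DUE = [5, 4, 16, 9]

def max_lateness(seq):
    """Objective criterion: minimum maximum lateness."""
    t, worst = 0, float("-inf")
    for j in seq:
        t += PROC[j]
        worst = max(worst, t - DUE[j])
    return worst

def iterative_prune(rounds=3, samples=100, keep=10):
    """Sample schedules, keep the precedence constraints that hold in all
    'good' (low max-lateness) samples, prune with them, and resample."""
    candidates = [(a, b) for a in range(4) for b in range(4) if a != b]
    active = []
    for _ in range(rounds):
        pool = []
        while len(pool) < samples:  # rejection sampling in the pruned space
            s = random.sample(range(4), 4)
            if all(s.index(a) < s.index(b) for a, b in active):
                pool.append(s)
        good = sorted(pool, key=max_lateness)[:keep]
        active = [(a, b) for a, b in candidates
                  if all(s.index(a) < s.index(b) for s in good)]
    return active

print("surviving precedence constraints:", iterative_prune())
```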

Relevance:

30.00%

Publisher:

Abstract:

Malicious software (malware) has significantly increased in number and effectiveness during the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques are used to ensure that signature-based anti-malware systems cannot detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have become very popular in commercial applications as well. However, attackers are now knowledgeable about these systems and have started preparing their countermeasures. This has led to an arms race between attackers and developers: novel systems are progressively built to tackle attacks that get more and more sophisticated. For this reason, developers increasingly need to anticipate the attackers' moves, meaning that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed on a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (anticipating the attacker's moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware, showing that a proactive approach can be applied both in the x86 and in the mobile world. The contributions provided by these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors; I then propose possible solutions to increase the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can easily be employed without particular knowledge of the targeted systems; I then examine a possible strategy to build a machine-learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems: in particular, I propose a methodology to build a powerful mobile fingerprinting system and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library for performing optimal attacks against machine-learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine-learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained with the aforementioned tools show that it is possible to proactively build systems that predict possible evasion attacks, suggesting that a proactive approach is crucial to building systems that provide concrete security against general and evasion attacks.
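
To illustrate what an "optimal evasion attack" against a learning-based detector looks like in the simplest case, here is a toy attack on a linear classifier. This is not AdversariaLib's API; the binary feature model, the mutable-feature set, and the greedy strategy are all assumptions made for the example.

```python
import numpy as np

def evade_linear(x, w, b, mutable, budget):
    """Push a malicious sample below a linear detector's threshold,
    score(x) = w @ x + b >= 0, by setting at most `budget` of the
    attacker-mutable binary features with the most negative weights
    (i.e. injecting benign-looking content, a mimicry-style change).
    """
    x = x.copy()
    for i in sorted((i for i in mutable if x[i] == 0 and w[i] < 0),
                    key=lambda i: w[i])[:budget]:
        x[i] = 1.0
    return x, float(w @ x + b)

# Hypothetical detector and sample: 6 binary features, 4 attacker-mutable.
w = np.array([0.8, -0.5, 0.3, -0.9, 0.1, -0.2])
x = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
x_adv, score = evade_linear(x, w, b=-0.6, mutable=[1, 3, 4, 5], budget=2)
print(x_adv, score)  # score drops below 0: the sample now evades detection
```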

Relevance:

30.00%

Publisher:

Abstract:

This document describes two sets of benchmark problem instances for the job shop scheduling problem. Each set of instances is supplied as a compressed (zipped) archive containing a single CSV file for each problem instance, using the format described in http://rollproject.org/jssp/jsspGen.pdf.
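
For readers who want to load the instances, a reader might look like the sketch below. The authoritative column layout is the one in the linked jsspGen.pdf; the code assumes the common JSSP convention of one job per row written as alternating machine, duration pairs, which may need adjusting to the actual format.

```python
import csv

def load_instance(path):
    """Read a JSSP instance CSV, assuming one job per row written as
    alternating machine, duration integers (verify against jsspGen.pdf)."""
    jobs = []
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            vals = [int(v) for v in row if v.strip()]
            jobs.append(list(zip(vals[0::2], vals[1::2])))
    return jobs  # jobs[j] = [(machine, duration), ...] in processing order
```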

Relevance:

30.00%

Publisher:

Abstract:

We describe a new hyper-heuristic method, NELLI-GP, for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between the heuristics in the evolved ensemble and the instances each solves provides new insights into features that might characterize similar instances.
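
To make "a heuristic composed of a linear sequence of dispatching rules" concrete, here is a toy interpreter for such sequences. NELLI-GP evolves the rules themselves as trees; this sketch replaces them with three fixed textbook rules, and the job data are invented.

```python
# Each job is (job_id, proc_time_of_next_op, work_remaining).
RULES = {
    "SPT":  lambda q: min(q, key=lambda j: j[1]),  # shortest processing time
    "LWR":  lambda q: min(q, key=lambda j: j[2]),  # least work remaining
    "FIFO": lambda q: q[0],                        # first in, first out
}

def dispatch(sequence, queue):
    """Apply the rule sequence one decision at a time, cycling through
    the sequence when there are more decisions than rules."""
    order = []
    for step in range(len(queue)):
        chosen = RULES[sequence[step % len(sequence)]](queue)
        queue = [j for j in queue if j is not chosen]
        order.append(chosen[0])
    return order

# Example: schedule four jobs with the two-rule sequence SPT, LWR.
print(dispatch(["SPT", "LWR"], [(0, 5, 9), (1, 2, 2), (2, 7, 14), (3, 4, 4)]))
```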

Relevance:

30.00%

Publisher:

Abstract:

This paper describes an algorithm for scheduling packets in real-time multimedia data streams. Common to these classes of data streams are service constraints in terms of bandwidth and delay. However, it is typical for real-time multimedia streams to tolerate bounded delay variations and, in some cases, finite losses of packets. We have therefore developed a scheduling algorithm that assumes streams have window-constraints on groups of consecutive packet deadlines. A window-constraint defines the number of packet deadlines that can be missed in a window of deadlines for consecutive packets in a stream. Our algorithm, called Dynamic Window-Constrained Scheduling (DWCS), attempts to guarantee that no more than x out of a window of y deadlines are missed for consecutive packets in real-time and multimedia streams. Using DWCS, the delay of service to real-time streams is bounded even when the scheduler is overloaded. Moreover, DWCS is capable of ensuring independent delay bounds on streams, while at the same time guaranteeing minimum bandwidth utilizations over tunable and finite windows of time. We show the conditions under which the total demand for link bandwidth by a set of real-time (i.e., window-constrained) streams can exceed 100% and still ensure all window-constraints are met. In fact, we show how it is possible to guarantee worst-case per-stream bandwidth and delay constraints while utilizing all available link capacity. Finally, we show how best-effort packets can be serviced with fast response time, in the presence of window-constrained traffic.
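
The (x, y) window-constraint itself is easy to state in code. Below is a minimal checker for a sliding-window reading of the constraint; it illustrates the service guarantee only, not DWCS proper, which schedules packets with dynamically adjusted per-stream priorities.

```python
from collections import deque

class WindowConstraint:
    """At most x missed deadlines in any window of y consecutive packets."""

    def __init__(self, x, y):
        self.x = x
        self.recent = deque(maxlen=y)  # True = deadline missed

    def record(self, missed):
        """Log one packet's outcome; return False if the constraint broke."""
        self.recent.append(missed)
        return sum(self.recent) <= self.x

wc = WindowConstraint(x=1, y=4)
print([wc.record(m) for m in [False, True, False, True]])  # 2nd miss -> False
```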

Relevance:

30.00%

Publisher:

Abstract:

TCP performance degrades when end-to-end connections extend over wireless links, which are characterized by high bit error rates and intermittent connectivity. Such link characteristics can significantly degrade TCP performance, as the TCP sender assumes wireless losses to be congestion losses, resulting in unnecessary congestion-control actions. Link errors can be reduced by increasing transmission power, code redundancy (FEC) or the number of retransmissions (ARQ). But increasing power costs resources, increasing code redundancy reduces the available channel bandwidth, and increasing persistency increases end-to-end delay. The paper proposes a TCP optimization through proper tuning of power management, FEC and ARQ in wireless environments (WLAN and WWAN). In particular, we conduct analytical and numerical analyses taking into account TCP (and "wireless-aware" TCP) performance under different settings. Our results show that increasing power, redundancy and/or retransmission levels always improves TCP performance by reducing link-layer losses. However, such improvements are often associated with costs, and arbitrary improvement cannot be realized without paying a great deal in return. It is therefore important to consider some kind of net utility function to be optimized, thus maximizing throughput at the least possible cost.
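
The closing point, optimizing a net utility rather than any single knob, can be sketched as a small grid search. The residual-loss model and the cost weights below are illustrative assumptions, not the paper's analysis.

```python
import itertools

def net_utility(power, fec, arq, base_loss=0.2,
                c_power=0.05, c_fec=0.03, c_arq=0.02):
    """Toy net utility: goodput after ARQ minus linear resource costs.
    Higher power/FEC levels shrink the residual loss rate; each ARQ
    retry recovers a share of what remains, at a delay/energy cost."""
    p_loss = base_loss / ((1 + power) * (1 + fec))
    goodput = 1 - p_loss ** (arq + 1)
    return goodput - (c_power * power + c_fec * fec + c_arq * arq)

levels = range(4)  # discrete settings 0..3 for each knob
best = max(itertools.product(levels, levels, levels),
           key=lambda cfg: net_utility(*cfg))
print("best (power, fec, arq) levels:", best)
```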

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service a wireless sensor network should provide is the estimation of local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis which gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and with constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
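
The occupancy argument behind this kind of implicit density estimation can be illustrated in a few lines. This is the generic idle-slot inversion, not DIP's actual protocol; the per-slot transmission probability is an assumed model parameter.

```python
import math
import random

def neighbors_from_idle(idle_fraction, p_tx):
    """Invert Pr[slot idle] = (1 - p_tx)^n to estimate neighborhood size n."""
    return math.log(idle_fraction) / math.log(1 - p_tx)

# Simulate 25 hidden neighbors, each transmitting w.p. 0.05 per slot;
# the node only observes which slots are busy, never any identifiers.
random.seed(1)
n_true, p_tx, slots = 25, 0.05, 20000
idle = sum(all(random.random() > p_tx for _ in range(n_true))
           for _ in range(slots)) / slots
print(f"estimate: {neighbors_from_idle(idle, p_tx):.1f} (true: {n_true})")
```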