934 results for Branch and bound algorithms
Abstract:
In this work, mathematical programming models for the structural and operational optimisation of energy systems are developed and applied to a selection of energy technology problems. The studied cases are taken from industrial processes and from large regional energy distribution systems. The models are based on Mixed Integer Linear Programming (MILP), Mixed Integer Non-Linear Programming (MINLP) and on a hybrid approach combining Non-Linear Programming (NLP) and Genetic Algorithms (GA). The optimisation of the structure and operation of energy systems in urban regions is treated in the work. Firstly, distributed energy systems (DES) with different energy conversion units and annual variations of consumer heating and electricity demands are considered. Secondly, district cooling systems (DCS) with cooling demands for a large number of consumers are studied from a long-term planning perspective, based on given predictions of how consumer cooling demand will develop in a region. The work also comprises the development of applications for heat recovery systems (HRS), where the dryer section HRS of a paper machine is taken as an illustrative example. The heat sources in these systems are moist air streams. Models are developed for different types of equipment price functions. The approach is based on partitioning the overall temperature range of the system into a number of temperature intervals in order to take into account the strong nonlinearities due to condensation in the heat recovery exchangers. The influence of parameter variations on the solutions of heat recovery systems is analysed, firstly by varying cost factors and secondly by varying process parameters. Point-optimal solutions obtained with a fixed-parameter approach are compared to robust solutions obtained with given parameter variation ranges. The work also studies enhanced utilisation of excess heat in heat recovery systems with impingement drying, electricity generation with low-grade excess heat, and the use of absorption heat transformers to raise a stream temperature above the excess heat temperature.
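As a rough illustration of the kind of MILP formulation such structural optimisation typically involves, the sketch below selects energy conversion units and dispatches them over demand periods. The unit data, costs and demands are invented for the example and are not taken from the thesis; the PuLP library is assumed to be available.

```python
# Minimal MILP sketch: which conversion units to build, and how to run them,
# to cover a heat demand at minimum investment + operating cost.
# All numbers are illustrative assumptions, not data from the thesis.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

units = {            # name: (capacity kW, investment cost, fuel cost per kWh)
    "boiler":    (800, 40_000, 0.06),
    "chp":       (600, 90_000, 0.04),
    "heat_pump": (400, 60_000, 0.03),
}
periods = {"winter": (2000, 700), "summer": (1000, 150)}   # hours, heat demand kW

prob = LpProblem("des_structure_and_operation", LpMinimize)
build = {u: LpVariable(f"build_{u}", cat=LpBinary) for u in units}
q = {(u, p): LpVariable(f"q_{u}_{p}", lowBound=0) for u in units for p in periods}

# objective: investment for built units + operating cost over all periods
prob += lpSum(units[u][1] * build[u] for u in units) + \
        lpSum(units[u][2] * q[u, p] * periods[p][0] for u in units for p in periods)

for p, (hours, demand) in periods.items():
    prob += lpSum(q[u, p] for u in units) >= demand          # cover the heat demand
    for u in units:
        prob += q[u, p] <= units[u][0] * build[u]             # capacity only if built

prob.solve(PULP_CBC_CMD(msg=False))
print({u: int(build[u].value()) for u in units})
```

Real models of this kind add many more binary variables, for example to represent piecewise-linearised equipment price functions of the type mentioned in the abstract.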
Abstract:
Thirty heads with the neck segment of Caiman latirostris were used. The animals came from a breeding facility called Mister Caiman, under the authorization of the Brazilian Institute of Environment and Renewable Natural Resources (Ibama). The animals were sacrificed according to the slaughtering routine of the abattoir, and the heads were sectioned at the level of the third cervical vertebra. The arterial system was washed with cold saline solution, with drainage through the jugular veins. Subsequently, the system was filled with red-colored latex injection. The pieces were then fixed in 20% formaldehyde for seven days. The brains were removed with a segment of spinal cord, the dura mater was removed and the arteries were dissected. At the level of the hypophysis, the internal carotid artery gave off a rostral branch and a short caudal branch, and continued naturally as the caudal cerebral artery. This artery projected laterodorsally and, as it passed over the optic tract, gave off its first central branch. It then penetrated the cerebral transverse fissure, emitting the diencephalic artery and next its second central branch. Still inside the fissure, it gave off occipital hemispheric branches and a pineal branch. It emerged from the cerebral transverse fissure over the occipital pole of the cerebral hemisphere and projected rostrally, sagittal to the cerebral longitudinal fissure, as the interhemispheric artery. This artery gave off medial and convex hemispheric branches to the respective surfaces of the cerebral hemispheres and anastomosed with its contralateral homologue, forming the common ethmoidal artery. The common ethmoidal artery entered the fissure between the olfactory peduncles, emerging ventrally and dividing into right and left ethmoidal arteries, which progressed towards the nasal cavities, vascularizing them. The territory of the caudal cerebral artery included the most caudal area of the base of the cerebral hemisphere, its convex surface, the olfactory peduncles and bulbs, the choroid plexuses and the diencephalon with its parietal organs.
Abstract:
Thirty Meleagris gallopavo heads with their neck segments were used. The animals were restrained and euthanized by intravenous injection of a combination of mebezonium iodide, embutramide and tetracaine hydrochloride (T 61, Intervet). The arterial system was rinsed with cold saline solution (15°C) containing 5000 IU of heparin and filled with red-colored latex. The samples were fixed in 20% formaldehyde for seven days. The brains were removed with a segment of cervical spinal cord; afterwards the dura mater was removed and the arteries were dissected. After the intercarotid anastomosis, the cerebral carotid arteries projected around the hypophysis until they reached the tuber cinereum and divided into their terminal branches, the caudal branch and the rostral branch. The rostral branch projected rostrolaterally and gave off, in sequence, two collateral branches, the caudal cerebral and middle cerebral arteries, and continued as its terminal branch, the cerebroethmoidal artery. The caudal cerebral artery of one antimere formed the interhemispheric artery, which gave off dorsal hemispheric branches to the convex surface of both antimeres. Its dorsal tectal mesencephalic branch, present in only one antimere, gave origin to the dorsal cerebellar artery. Inside the cerebral transverse fissure, after the origin of the dorsal tectal mesencephalic artery, the caudal cerebral artery emitted occipital hemispheric branches, pineal branches and medial hemispheric branches in both antimeres. The territory of the caudal cerebral artery comprised the entire surface of the dorsal hemioptic lobe, the rostral surface of the cerebellum, the diencephalic structures, the caudal pole and the medial surface of the cerebral hemisphere and, on the convex surface, the sagittal eminence except for its most rostral third. Owing to the asymmetry found in the ramifications of the caudal cerebral arteries, the models were classified into three types and their respective subtypes.
Abstract:
The effect of hypoxia on the levels of glycogen, glucose and lactate, as well as on the activities and binding of glycolytic and associated enzymes to subcellular structures, was studied in brain, liver and white muscle of the teleost fish Scorpaena porcus. Hypoxia exposure decreased glucose levels in liver from 2.53 to 1.70 µmol/g wet weight, whereas in muscle glucose increased from 3.64 to 25.1 µmol/g wet weight. Maximal activities of several enzymes in brain were increased by hypoxia: hexokinase by 23%, phosphoglucoisomerase by 47% and phosphofructokinase (PFK) by 56%. However, activities of other enzymes in brain, as well as enzymes in liver and white muscle, were largely unchanged or decreased during experimental hypoxia. Glycolytic enzymes in all three tissues were partitioned between soluble and particulate-bound forms. In several cases, the percentage of bound enzymes was reduced during hypoxia; bound aldolase in brain was reduced from 36.4 to 30.3%, whereas glucose-6-phosphate dehydrogenase fell from 55.7 to 28.7% bound. In muscle, PFK was reduced from 57.4 to 41.7% bound. Conversely, the proportion of bound aldolase and triosephosphate isomerase increased in hypoxic muscle. Phosphoglucomutase did not appear to occur in a bound form in liver, and bound phosphoglucomutase disappeared in muscle during hypoxia exposure. Anoxia exposure also led to the disappearance of bound fructose-1,6-bisphosphatase in liver, whereas a bound fraction of this enzyme appeared in white muscle of anoxic animals. The possible function of reversible binding of glycolytic enzymes to subcellular structures as a regulatory mechanism of carbohydrate metabolism is discussed.
Abstract:
The interconnections of customer loyalty, employee engagement and business performance have been examined separately in several previous studies, but a coherent study combining all of these components has been lacking. This thesis studies all of these components and their interrelations at the same time, in order to understand the organization as one whole. The thesis includes a comprehensive review of previous studies on customer loyalty and employee engagement. The theory section presents both the theoretical approaches and the empirical findings from the earlier literature, and thereby builds a strong foundation for the empirical part of the thesis. The empirical data were provided by three case companies of a Nordic group operating in the business-to-business professional services sector, and the Net Promoter Score method was used to measure both customer loyalty and employee engagement. The thesis leaves interesting research questions open and therefore offers an intriguing field for future research.
Abstract:
The increasing interconnection of information and communication systems leads to a further increase in complexity and thus to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on the Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of normal network behaviour (NNB) and an update model. In OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased by novel approaches for initialising the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is to further examine the unknown connections detected by the EGHSOM and to verify whether they are normal. However, because of the concept-drift phenomenon the network traffic changes constantly, which in real time leads to non-stationary network data. This phenomenon is handled by the update model. The EGHSOM model detects new anomalies effectively and the NNB model adapts optimally to changes in the network data. In the experimental evaluation the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was evaluated with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: the processing of the collected network data, the achievement of the best performance (e.g. overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
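For illustration only, the sketch below shows one way the classification-confidence margin idea described above can be realised on top of a trained (GH)SOM-style map: a connection vector keeps the label of its best-matching unit only when the margin between the best and second-best unit distances is large enough; otherwise it is flagged as unknown and would be handed to the NNB model. The prototype data, labels and threshold are assumptions for the example, not values or code from the dissertation.

```python
# Hedged sketch of a classification-confidence margin check on a SOM-like
# codebook; numbers and names are illustrative, not the dissertation's.
import numpy as np

def classify_with_margin(x, units, labels, margin_threshold=0.2):
    """units: (n_units, n_features) prototype vectors of a trained map;
    labels: class label per unit; x: one connection vector."""
    d = np.linalg.norm(units - x, axis=1)        # distance to every prototype
    order = np.argsort(d)
    best, second = d[order[0]], d[order[1]]
    margin = (second - best) / (second + 1e-12)  # relative confidence margin
    if margin < margin_threshold:
        return "unknown"                         # defer to the normal-behaviour (NNB) check
    return labels[order[0]]

# toy usage with random prototypes (purely illustrative)
rng = np.random.default_rng(0)
units = rng.normal(size=(20, 8))
labels = np.array(["normal"] * 10 + ["attack"] * 10)
print(classify_with_margin(rng.normal(size=8), units, labels))
```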
Abstract:
Using the MIT Serial Link Direct Drive Arm as the main experimental device, various issues in trajectory and force control of manipulators were studied in this thesis. Since accurate modeling is important for any controller, issues of estimating the dynamic model of a manipulator and its load were addressed first. Practical and effective algorithms were developed from the Newton-Euler equations to estimate the inertial parameters of manipulator rigid-body loads and links. Load estimation was implemented both on a PUMA 600 robot and on the MIT Serial Link Direct Drive Arm. With the link estimation algorithm, the inertial parameters of the direct drive arm were obtained. For both the load and link estimation results, the estimated parameters are good models of the actual system for control purposes, since torques and forces can be predicted accurately from them. The estimated model of the direct drive arm was then used to evaluate trajectory-following performance with feedforward and computed-torque control algorithms. The experimental evaluations showed that dynamic compensation can greatly improve trajectory-following accuracy. Various stability issues of force control were studied next. It was determined that there are two types of instability in force control. Dynamic instability, present in all of the previous force control algorithms discussed in this thesis, is caused by the interaction of a manipulator with a stiff environment. Kinematic instability is present only in the hybrid control algorithm of Raibert and Craig, and is caused by the interaction of the inertia matrix with the Jacobian inverse coordinate transformation in the feedback path. Several methods were suggested and demonstrated experimentally to solve these stability problems. The results of the stability analyses were then incorporated in implementing a stable force/position controller on the direct drive arm by the modified resolved acceleration method, using both joint torque and wrist force sensor feedback.
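To make the load-estimation idea concrete, the sketch below shows a much simplified, gravity-only version of estimating a rigid-body load from wrist force/torque readings: with the load held stationary in several known orientations, the Newton-Euler equations reduce to f = m Rᵀg and τ = c × f, which are linear in the mass m and centre of mass c and can be solved by least squares. This is a hedged illustration under those assumptions, not the thesis algorithm, which also identifies the inertia tensor from full motion data.

```python
# Hedged sketch: static estimation of load mass and centre of mass from
# wrist force/torque sensor readings taken in several orientations.
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def estimate_load(rotations, forces, torques, g=np.array([0.0, 0.0, -9.81])):
    """rotations: sensor-to-world rotation matrices (3x3), one per static pose;
    forces, torques: matching wrist sensor readings in the sensor frame."""
    gs = np.concatenate([R.T @ g for R in rotations])   # gravity seen in the sensor frame
    f = np.concatenate(forces)
    m = float(f @ gs / (gs @ gs))                       # least squares for f = m * gs
    A = np.vstack([-skew(fi) for fi in forces])         # tau = c x f  =  -[f]_x c
    c, *_ = np.linalg.lstsq(A, np.concatenate(torques), rcond=None)
    return m, c                                          # mass and centre of mass
```

In the full dynamic case the same least-squares structure carries over, with a larger regressor stacked from motion data covering all ten inertial parameters per body.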
Abstract:
In this paper a precorrected-FFT Fast Multipole Tree (pFFT-FMT) method for solving the potential flow around arbitrary three-dimensional bodies is presented. The method takes advantage of the efficiency of the pFFT and FMT algorithms to facilitate more demanding computations such as automatic wake generation and hands-off steady and unsteady aerodynamic simulations. The velocity potential on the body surfaces and in the domain is determined using a pFFT Boundary Element Method (BEM) approach based on the Green's Theorem Boundary Integral Equation. The vorticity trailing all lifting surfaces in the domain is represented using a Fast Multipole Tree, time-advected, vortex particle method. Some simple steady-state flow solutions are presented to demonstrate the basic capabilities of the solver. Although this paper focuses primarily on steady-state solutions, it should be noted that this approach is designed to be a robust and efficient unsteady potential flow simulation tool, useful for rapid computational prototyping.
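For readers unfamiliar with the formulation, a common statement of the Green's-theorem boundary integral equation that such BEM solvers discretise is given below; the notation and sign convention are assumed here and may differ from the paper's.

\[
\tfrac{1}{2}\,\phi(\mathbf{x}) = \int_{S}\left[ G(\mathbf{x},\mathbf{x}')\,\frac{\partial \phi}{\partial n'}(\mathbf{x}') - \phi(\mathbf{x}')\,\frac{\partial G}{\partial n'}(\mathbf{x},\mathbf{x}') \right] dS', \qquad G(\mathbf{x},\mathbf{x}') = -\frac{1}{4\pi\,\lvert \mathbf{x}-\mathbf{x}'\rvert},
\]

for a field point \(\mathbf{x}\) on the smooth body surface \(S\), with \(\partial\phi/\partial n\) fixed by the no-penetration condition. The pFFT and FMT machinery accelerates the dense matrix-vector products that arise from discretising this equation and the trailing vorticity.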
Abstract:
We compare a broad range of optimal product line design methods. The comparisons take advantage of recent advances that make it possible to identify the optimal solution to problems that are too large for complete enumeration. Several of the methods perform surprisingly well, including Simulated Annealing, Product-Swapping and Genetic Algorithms. The Product-Swapping heuristic is remarkable for its simplicity. The performance of this heuristic suggests that the optimal product line design problem may be far easier to solve in practice than indicated by complexity theory.
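Since the paper highlights the simplicity of the Product-Swapping heuristic, a rough sketch of a swapping heuristic of that kind is given below under a first-choice rule: starting from a random line of k products, single-product swaps are accepted whenever they increase the share of respondents whose best in-line utility beats an outside option. The utility matrix, outside option and acceptance rule are assumptions for the illustration, not the paper's exact specification.

```python
# Hedged sketch of a product-swapping style heuristic for product line design.
import numpy as np

def share_of_choices(line, utilities, outside_utility=0.0):
    best_in_line = utilities[:, line].max(axis=1)      # each respondent's best in-line product
    return float(np.mean(best_in_line > outside_utility))

def product_swapping(utilities, k, seed=0):
    rng = np.random.default_rng(seed)
    n_products = utilities.shape[1]
    line = list(rng.choice(n_products, size=k, replace=False))
    current = share_of_choices(line, utilities)
    improved = True
    while improved:                                    # repeat until no single swap helps
        improved = False
        for i in range(k):
            for q in range(n_products):
                if q in line:
                    continue
                trial = line[:i] + [q] + line[i + 1:]  # swap product i for candidate q
                score = share_of_choices(trial, utilities)
                if score > current:
                    line, current, improved = trial, score, True
    return line, current

utilities = np.random.default_rng(1).normal(size=(200, 30))  # respondents x candidate products
print(product_swapping(utilities, k=5))
```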
Abstract:
Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof of principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
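A hedged sketch of the kind of evolving range-dependent random graph referred to above is given below: an edge between nodes i and j exists with a probability that decays geometrically with the range |i - j|, and between snapshots edges are born or die under a simple range-biased birth/death rule. The parameter values and the exact birth/death rule are illustrative assumptions, not the model or calibration used in the paper.

```python
# Hedged sketch: static range-dependent edge probabilities plus a simple
# birth/death update between network snapshots.
import numpy as np

def range_dependent_probabilities(n, alpha=0.9, lam=0.8):
    idx = np.arange(n)
    rng_dist = np.abs(idx[:, None] - idx[None, :])     # range |i - j|
    p = alpha * lam ** (rng_dist - 1)
    np.fill_diagonal(p, 0.0)
    return p

def evolve(adj, p_edge, birth=0.1, death=0.2, rng=None):
    """One step of a birth/death edge process biased by the range-dependent p_edge."""
    rng = rng or np.random.default_rng()
    u = rng.random(adj.shape)
    born = (adj == 0) & (u < birth * p_edge)           # absent edges may appear
    killed = (adj == 1) & (u < death)                  # present edges may vanish
    upper = np.triu(((adj == 1) & ~killed) | born, k=1).astype(int)
    return upper + upper.T                             # keep the graph undirected

n = 50
p = range_dependent_probabilities(n)
rng = np.random.default_rng(0)
adj = np.triu(rng.random((n, n)) < p, k=1).astype(int)
adj = adj + adj.T
snapshots = [adj]
for _ in range(10):                                    # a short sequence of snapshots
    adj = evolve(adj, p, rng=rng)
    snapshots.append(adj)
```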
Abstract:
The combination of radar and lidar in space offers the unique potential to retrieve vertical profiles of ice water content and particle size globally, and two algorithms developed recently claim to have overcome the principal difficulty with this approach: that of correcting the lidar signal for extinction. In this paper, "blind tests" of these algorithms are carried out, using realistic 94-GHz radar and 355-nm lidar backscatter profiles simulated from aircraft-measured size spectra, and including the effects of molecular scattering, multiple scattering, and instrument noise. Radiation calculations are performed on the true and retrieved microphysical profiles to estimate the accuracy with which radiative flux profiles could be inferred remotely. It is found that the visible extinction profile can be retrieved independent of assumptions on the nature of the size distribution, the habit of the particles, the mean extinction-to-backscatter ratio, or errors in instrument calibration. Local errors in retrieved extinction can occur in proportion to local fluctuations in the extinction-to-backscatter ratio, but down to 400 m above the height of the lowest lidar return, optical depth is typically retrieved to better than 0.2. Retrieval uncertainties are greater at the far end of the profile, and errors in total optical depth can exceed 1, which changes the shortwave radiative effect of the cloud by around 20%. Longwave fluxes are much less sensitive to errors in total optical depth, and may generally be calculated to better than 2 W m^-2 throughout the profile. It is important for retrieval algorithms to account for the effects of lidar multiple scattering, because if this is neglected, optical depth is underestimated by approximately 35%, resulting in cloud radiative effects being underestimated by around 30% in the shortwave and 15% in the longwave. Unlike the extinction coefficient, the inferred ice water content and particle size can vary by 30%, depending on the assumed mass-size relationship (a problem common to all remote retrieval algorithms). However, radiative fluxes are almost completely determined by the extinction profile, and if this is correct, then errors in these other parameters have only a small effect in the shortwave (around 6%, compared to that of clear sky) and a negligible effect in the longwave.
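The extinction correction at issue can be summarised by the standard attenuated-backscatter relation below, written with a multiple-scattering factor; the notation is assumed here rather than taken from the paper.

\[
\beta'(z) = \beta(z)\,\exp\!\left(-2\,\eta \int_{0}^{z} \alpha(z')\,dz'\right), \qquad S = \frac{\alpha(z)}{\beta(z)},
\]

where \(\beta'\) is the measured attenuated backscatter, \(\beta\) the true backscatter, \(\alpha\) the extinction coefficient (integrated along the path from the instrument to range \(z\) in the exponent), \(\eta \le 1\) the multiple-scattering factor, and \(S\) the extinction-to-backscatter (lidar) ratio whose fluctuations drive the local retrieval errors mentioned above.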
Abstract:
Calliandra calothyrsus is a tree legume native to Mexico and Central America. The species has attracted considerable attention for its capacity to produce both fuelwood and foliage for either green manure or fodder. Its high content of proanthocyanidins (condensed tannins) and associated low digestibility has, however, limited its use as a feed for ruminants, and there is also a widespread perception that wilting the leaves further reduces their nutritive value. Nevertheless, there has been increasing uptake of calliandra as fodder in certain regions, notably the Central Highlands of Kenya. The present study, conducted in Embu, Kenya, investigated effects of provenance, wilting, cutting frequency and seasonal variation both in the laboratory (in vitro digestibility, crude protein, neutral detergent fibre, extractable and bound proanthocyanidins) and in on-station animal production trials with growing lambs and lactating goats. The local Kenyan landrace of calliandra (Embu) and a closely-related Guatemalan provenance (Patulul) were found to be significantly different, and superior, to a provenance from Nicaragua (San Ramon) in most of the laboratory traits measured, as well as in animal production and feed efficiency. Cutting frequency had no important effect on quality; and although all quality traits displayed seasonal variation there was little discernible pattern to this variation. Wilting had a much less negative effect than expected, and for lambs fed calliandra as a supplement to a low quality basal feed (maize stover), wilting was actually found to give higher live-weight gain and feed efficiency. Conversely, with a high quality basal diet (Napier grass) wilting enhanced intake but not live-weight gain, so feed efficiency was greater for fresh material. The difference between fresh and wilted leaves was not great enough to justify the current widespread recommendation that calliandra should always be fed fresh.
Abstract:
Proteomic analysis using electrospray liquid chromatography-mass spectrometry (ESI-LC-MS) has been used to compare the sites of glycation (Amadori adduct formation) and carboxymethylation of RNase and to assess the role of the Amadori adduct in the formation of the advanced glycation end-product (AGE), N-ε-(carboxymethyl)lysine (CML). RNase (13.7 mg/mL, 1 mM) was incubated with glucose (0.4 M) at 37 °C for 14 days in phosphate buffer (0.2 M, pH 7.4) under air. On the basis of ESI-LC-MS of tryptic peptides, the major sites of glycation of RNase were, in order, K41, K7, K1, and K37. Three of these, in order, K41, K7, and K37, were also the major sites of CML formation. In other experiments, RNase was incubated under anaerobic conditions (1 mM DTPA, N2-purged) to form Amadori-modified protein, which was then incubated under aerobic conditions to allow AGE formation. Again, the major sites of glycation were, in order, K41, K7, K1, and K37, and the major sites of carboxymethylation were K41, K7, and K37. RNase was also incubated with 1-5 mM glyoxal, substantially more than is formed by autoxidation of glucose under the experimental conditions, but there was only trace modification of lysine residues, primarily at K41. We conclude the following: (1) that the primary route to formation of CML is by autoxidation of Amadori adducts on protein, rather than by glyoxal generated on autoxidation of glucose; and (2) that carboxymethylation, like glycation, is a site-specific modification of protein affected by neighboring amino acids and bound ligands, such as phosphate or phosphorylated compounds. Even when the overall extent of protein modification is low, localization of a high proportion of the modifications at a few reactive sites might have important implications for understanding losses in protein functionality in aging and diabetes and also for the design of AGE inhibitors.
Abstract:
Discussions on banking reforms to reduce financial exclusion have referred little to possible attitudinal constraints, on the part of staff at both branch and institutional levels, inhibiting the provision of financial services to the poor. The research project, funded by the ESCOR (now Social Science Research) Small Grants Committee, has focused on this aspect of financial exclusion. The research commenced in May 2001 and was completed in April 2002. Profiles of the rural bank branch managers, including personal background, professional background and workplace, are presented. Attitudes of managers toward aspects of their work environment and the rural poor are examined, using results from both quantitative and qualitative analysis. Finally, the emerging policy implications are discussed. These include bank reforms to address human resource management, the work environment, intermediate bank management and organization, and the client interface.
Abstract:
With the latest advances in advanced computer architectures, we are already seeing large-scale machines at the petascale level, and exascale computing is being discussed. All of these require efficient, scalable algorithms in order to bridge the performance gap. In this paper, examples of various approaches to designing scalable algorithms for such advanced architectures are given, and the corresponding properties of these algorithms are outlined and discussed. The examples cover scalable algorithms applied to large-scale problems in areas such as Computational Biology and Environmental Modelling. The key properties of such advanced and scalable algorithms are outlined.