997 results for Scalability study
Abstract:
Many real-world optimization problems contain multiple (often conflicting) goals to be optimized concurrently, commonly referred to as multi-objective problems (MOPs). Over the past few decades, a plethora of multi-objective algorithms have been proposed, often tested on MOPs possessing two or three objectives. Unfortunately, when tasked with solving MOPs with four or more objectives, referred to as many-objective problems (MaOPs), a large majority of optimizers experience significant performance degradation. The downfall of these optimizers is that simultaneously maintaining a well-spread set of solutions and appropriate selection pressure to converge becomes difficult as the number of objectives increases. This difficulty is further compounded for large-scale MaOPs, i.e., MaOPs possessing large numbers of decision variables. In this thesis, we explore the challenges of many-objective optimization and propose three new promising algorithms designed to efficiently solve MaOPs. Experimental results demonstrate that the proposed optimizers perform very well, often outperforming state-of-the-art many-objective algorithms.
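To illustrate why selection pressure collapses as objectives grow, here is a minimal Python sketch (our illustration, not from the thesis) that estimates the fraction of mutually non-dominated solutions in a random population; with many objectives nearly every solution is non-dominated, so Pareto dominance alone no longer discriminates between candidates.

```python
import random

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fraction(n_solutions, n_objectives, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_objectives)] for _ in range(n_solutions)]
    non_dom = [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]
    return len(non_dom) / n_solutions

# The non-dominated fraction approaches 1 as the number of objectives grows,
# which is the loss of selection pressure described above.
for m in (2, 3, 5, 10):
    print(m, non_dominated_fraction(100, m))
```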
Abstract:
Current computer systems have evolved from featuring only a single processing unit and limited RAM, on the order of kilobytes or a few megabytes, to including several multicore processors, offering on the order of several tens of concurrent execution contexts, and main memory on the order of several tens to hundreds of gigabytes. This makes it possible to keep all the data of many applications in main memory, leading to the development of in-memory databases. Compared to disk-backed databases, in-memory databases (IMDBs) are expected to provide better performance by incurring less I/O overhead. In this dissertation, we present a scalability study of two general-purpose IMDBs on multicore systems. The results show that current general-purpose IMDBs do not scale on multicores, due to contention among threads running concurrent transactions. In this work, we explore different directions to overcome the scalability issues of IMDBs on multicores, while enforcing strong isolation semantics. First, we present a solution that requires no modification to either the database systems or the applications, called MacroDB. MacroDB replicates the database among several engines, using a master-slave replication scheme, where update transactions execute on the master, while read-only transactions execute on the slaves. This reduces contention, allowing MacroDB to offer scalable performance under read-only workloads, while update-intensive workloads suffer a performance loss compared with the standalone engine. Second, we delve into the database engine and identify the concurrency control mechanism used by the storage sub-component as a scalability bottleneck. We then propose a new locking scheme that allows the removal of such mechanisms from the storage sub-component. This modification offers a performance improvement under all workloads compared with the standalone engine, while scalability remains limited to read-only workloads. Next, we address the scalability limitations for update-intensive workloads and propose reducing the locking granularity from the table level to the attribute level. This further improves performance for intensive and moderate update workloads, at a slight cost for read-only workloads, with scalability limited to read-intensive and read-only workloads. Finally, we investigate the impact applications have on the performance of database systems, by studying how the order of operations inside transactions influences database performance. We then propose a Read before Write (RbW) interaction pattern, under which transactions perform all read operations before executing write operations. The RbW pattern allowed TPC-C to achieve scalable performance on our modified engine for all workloads, scaling almost up to the total number of cores while enforcing strong isolation.
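To make the master-slave scheme concrete, the following is a minimal sketch (hypothetical names and API, not MacroDB's actual code) of the routing rule described above: update transactions execute on the master replica, while read-only transactions are spread across the slaves.

```python
import itertools

class MacroDBRouter:
    """Illustrative router only: updates go to the master,
    read-only transactions rotate round-robin across the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self._next_slave = itertools.cycle(slaves)

    def route(self, txn_is_read_only: bool):
        # Read-only transactions never contend with updates on the master,
        # which is where the read-only scalability reported above comes from.
        return next(self._next_slave) if txn_is_read_only else self.master

router = MacroDBRouter("master-engine", ["slave-1", "slave-2", "slave-3"])
print(router.route(txn_is_read_only=True))   # slave-1
print(router.route(txn_is_read_only=False))  # master-engine
```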
Abstract:
The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for the verification of numerical software, supporting a substantially more expressive language for specifications than other publicly available automated tools. The additional expressivity in the specification language is provided by two constructs. First, the specification can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in the specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems. This algorithm is based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm, and its source code is publicly available. In this paper we report on experiments using PolyPaver which indicate that the additional expressivity does not come at a performance cost when compared with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
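As an illustration of the first specification construct, inclusions between interval arithmetic expressions, here is a minimal Python sketch (our own, not PolyPaver's implementation): an inclusion A ⊆ B between interval expressions holds when B's bounds enclose A's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Standard interval multiplication: extremes over all corner products.
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

    def included_in(self, other) -> bool:
        # The inclusion A ⊆ B used in PolyPaver-style specifications.
        return other.lo <= self.lo and self.hi <= other.hi

x = Interval(0.0, 1.0)
# A specification like (x * x) ⊆ [-0.1, 1.1] becomes:
print((x * x).included_in(Interval(-0.1, 1.1)))  # True
```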
Abstract:
The results shown in this thesis are based on selected publications from the 2000s. The work was carried out in several national and EC-funded public research projects and in close cooperation with industrial partners. The main objective of the thesis was to study and quantify the most important phenomena of circulating fluidized bed (CFB) combustors by developing and applying suitable experimental and modelling methods using laboratory-scale equipment. An understanding of these phenomena plays an essential role in the development of combustion and emission performance, and of the availability and controls of CFB boilers. Experimental procedures to study fuel combustion behaviour under CFB conditions are presented in the thesis. Steady-state and dynamic measurements under well-controlled conditions were carried out to produce the data needed for the development of high-efficiency, utility-scale CFB technology. The importance of combustion control and furnace dynamics is emphasized when CFB boilers are scaled up with a once-through steam cycle. Qualitative information on fuel combustion characteristics was obtained directly by comparing flue gas oxygen responses during impulse-change experiments with the fuel feed. A one-dimensional, time-dependent model was developed to analyse the measurement data. Emission formation was studied together with fuel combustion behaviour. Correlations were developed for NO, N2O, CO and char loading as a function of temperature and oxygen concentration in the bed area. An online method to characterize char loading under CFB conditions was developed and validated with pilot-scale CFB tests. Finally, a new method to control the air and fuel feeds in CFB combustion was introduced. The method is based on models and an analysis of the fluctuation of the flue gas oxygen concentration. The effect of high oxygen concentrations on fuel combustion behaviour was also studied to evaluate the potential of CFB boilers for applying oxygen-firing technology to CCS. In future studies, it will be necessary to go through the whole scale-up chain, from laboratory phenomena devices through pilot-scale test rigs to large-scale commercial boilers, in order to validate the applicability and scalability of the results. This thesis covers the chain between the laboratory-scale phenomena test rig (bench scale) and the CFB process test rig (pilot scale). CFB technology has been scaled up successfully from industrial scale to utility scale during the last decade. The work presented in this thesis has, for its part, supported this development by producing new detailed information on combustion under CFB conditions.
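As a loose illustration of the kind of time-dependent analysis described (a toy sketch with made-up parameters, not the thesis model), the following simulates a well-mixed flue-gas O2 balance responding to an impulse change in the fuel feed, with first-order char burnout:

```python
# Illustrative only: a zero-dimensional O2 balance with first-order char burnout,
# loosely mimicking the flue-gas oxygen response to a fuel-feed impulse.
# All parameter values are invented for demonstration.

def simulate(t_end=60.0, dt=0.01, k_char=0.3, tau_gas=5.0, o2_in=0.21):
    o2, char = o2_in, 0.0
    history = []
    t = 0.0
    while t < t_end:
        feed = 1.0 if 10.0 <= t < 10.5 else 0.0   # fuel-feed impulse at t = 10 s
        burn = k_char * char                       # first-order char burnout
        char += (feed - burn) * dt
        # O2 is consumed by combustion and replenished by incoming air
        # (stoichiometry folded into an arbitrary 0.05 factor).
        o2 += ((o2_in - o2) / tau_gas - 0.05 * burn) * dt
        history.append((round(t, 2), o2))
        t += dt
    return history

trace = simulate()
print(min(o2 for _, o2 in trace))  # the O2 dip reflects the char burnout dynamics
```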
Abstract:
The born global phenomenon and the internationalization of Brazilian technology-based companies are recent topics in the academic literature, in part because the phenomenon itself is recent. Few studies have focused on the Brazilian market, and the few that exist have a quantitative bias. This study, by contrast, aims to analyze startups qualitatively. An extensive literature review was carried out to better ground the study, covering internationalization methods, entrepreneurship in Brazil, and the born global phenomenon in general. Interviews were conducted with entrepreneurs in Brazil who had taken their business models through the internationalization process, in order to gather insights into the peculiarities of the Brazilian market. The analysis also covered the scalability factors of technology-dependent business models, motivations for internationalization, market selection criteria, government programs, and the role of Brazilian startups from a global perspective.
Abstract:
The majority of biometric researchers focus on matching accuracy using biometric databases, including iris databases, while scalability and speed issues have been neglected. In applications such as identification at airports and borders, it is critical for the identification system to have a low response time. In this paper, a graph-based framework for pattern recognition, called Optimum-Path Forest (OPF), is utilized as a classifier in a previously developed iris recognition system. The aim of this paper is to verify the effectiveness of OPF in the field of iris recognition, and its performance on iris databases of various scales. This paper investigates several classifiers widely used in the iris recognition literature, examining response time along with accuracy. The existing Gauss-Laguerre wavelet-based iris coding scheme, which shows perfect discrimination with a rotary Hamming distance classifier, is used for iris coding. The performance of the classifiers is compared using small-, medium-, and large-scale databases. The comparison shows that OPF has a faster response for large-scale databases, thus performing better than the more accurate but slower Bayesian classifier.
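The rotary Hamming distance mentioned above is a standard iris-matching device; a minimal sketch (our illustration): take the minimum fractional Hamming distance over circular shifts of one binary iris code, compensating for in-plane eye rotation between captures.

```python
def hamming(a, b):
    """Fractional Hamming distance between two equal-length bit sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def rotary_hamming(code1, code2, max_shift=8):
    """Minimum Hamming distance over circular shifts of code2,
    compensating for eye rotation between iris captures."""
    return min(
        hamming(code1, code2[s:] + code2[:s])
        for s in range(-max_shift, max_shift + 1)
    )

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = a[2:] + a[:2]              # the same code, rotated by two positions
print(rotary_hamming(a, b))    # 0.0: the rotated capture still matches
```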
Abstract:
Though IP multicast is resource-efficient in delivering data to a group of members simultaneously, it suffers from a scalability problem as the number of concurrently active multicast groups grows, because it requires a router to keep forwarding state for every multicast tree passing through it. To solve this state scalability problem, we proposed a scheme called aggregated multicast. The key idea is that multiple groups are forced to share a single delivery tree. In our earlier work, we introduced the basic concept of aggregated multicast and presented some initial results showing that multicast state can be reduced. In this paper, we develop a more quantitative assessment of the cost/benefit trade-offs. We propose an algorithm to assign multicast groups to delivery trees with controllable cost and introduce metrics to measure multicast state and tree management overhead for multicast schemes. We then compare aggregated multicast with conventional multicast schemes, such as the source-specific tree scheme and the shared tree scheme. Our extensive simulations show that aggregated multicast can achieve significant reductions in routing state and tree management overhead while containing the expense of extra resources (bandwidth waste and tunnelling overhead). We conclude that aggregated multicast is a very cost-effective and promising direction for scalable transit-domain multicast provisioning.
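A minimal sketch of the tree-sharing idea (a hypothetical heuristic, not necessarily the paper's exact algorithm): assign a group to an existing shared tree only if the tree covers all group members and the fraction of tree nodes that are not members, a proxy for bandwidth waste, stays below a threshold.

```python
def assign_group(group_members, trees, waste_threshold=0.3):
    """Return an existing shared tree for the group, or None (build a native tree).
    A tree is eligible if it covers all members and the fraction of its
    nodes that are NOT members (proxy for bandwidth waste) is acceptable."""
    best, best_waste = None, waste_threshold
    for tree in trees:
        covered = tree["nodes"]
        if not group_members <= covered:
            continue  # this tree does not reach every member
        waste = len(covered - group_members) / len(covered)
        if waste <= best_waste:
            best, best_waste = tree, waste
    return best

trees = [{"id": "T1", "nodes": {"a", "b", "c", "d"}},
         {"id": "T2", "nodes": {"a", "b", "c", "d", "e", "f", "g", "h"}}]
print(assign_group({"a", "b", "c"}, trees)["id"])  # T1: covers the group with least waste
```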
Abstract:
Game Theory principles allow the development of stochastic multi-robot patrolling models to protect critical infrastructures. Critical infrastructure protection is a great concern for countries around the world, mainly due to the terrorist attacks of the last decade. In this document, the term infrastructure includes airports, nuclear power plants, and many other facilities. The patrolling problem is defined as the activity of traversing a given environment to monitor activity or to sense environmental variables. If this activity is performed by a fleet of robots, they must visit a set of points of interest in the environment at irregular intervals of time for security purposes. This problem is solved using multi-robot patrolling models. To date, works in the literature have solved this problem by applying various mathematical principles. The multi-robot patrolling models developed in those works represent great advances in this field. However, the models that obtain the best results are unfeasible for security applications due to their centralized and predictable nature. This thesis presents five distributed and unpredictable multi-robot patrolling models based on mathematical learning models derived from Game Theory. These models aim to overcome the disadvantages of previous work. To this end, the multi-robot patrolling problem was formulated using concepts from Graph Theory to represent the environment, with several normal-form games defined at each vertex of a graph. The multi-robot patrolling models developed in this research work have been validated and compared with the best-ranked multi-robot patrolling models in the literature, using both a patrolling simulator and real robots. Experimental results show that the models developed in this work improve on previous ones in 80% of 150 case studies. Moreover, these models exhibit several features that are important in security applications, such as distribution, robustness, scalability, and dynamism. The achievements of this research demonstrate the potential of Game Theory for developing patrolling models to protect infrastructures.
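To make the formulation concrete, here is a minimal sketch (our illustration, not one of the five thesis models): at its current vertex, a robot samples its next move from a mixed strategy weighted by how long each neighboring vertex has gone unvisited, keeping the patrol unpredictable to an observer.

```python
import random

def next_vertex(graph, current, idleness, rng=random.Random(0)):
    """Sample the next vertex from a mixed strategy proportional to idleness.
    The stochastic choice keeps the patrol route unpredictable."""
    neighbors = graph[current]
    weights = [idleness[v] + 1e-6 for v in neighbors]  # avoid all-zero weights
    return rng.choices(neighbors, weights=weights, k=1)[0]

graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
idleness = {"A": 0.0, "B": 5.0, "C": 12.0}
print(next_vertex(graph, "A", idleness))  # "C" is most likely: longest unvisited
```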
Abstract:
The paper provides evidence that spatial indexing structures offer faster resolution of Formal Concept Analysis queries than B-Tree/Hash methods. We show that many Formal Concept Analysis operations, such as computing contingent and extent sizes as well as listing the matching objects, enjoy improved performance with the use of spatial indexing structures such as the RD-Tree. Speed improvements of up to eighty times were observed, depending on the data and query. The motivation for our study is the application of Formal Concept Analysis to Semantic File Systems, where millions of formal objects must be dealt with. Spatial indexing has also been found to provide an effective indexing technique for more general-purpose applications requiring scalability in Formal Concept Analysis systems. The coverage and benchmarking are presented with general applications in mind.
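As a minimal illustration of the queries being indexed (our sketch, not the paper's implementation): in Formal Concept Analysis the extent of an attribute set is the set of objects possessing all those attributes, which is exactly the superset query an RD-Tree accelerates.

```python
def extent(context, attributes):
    """Objects whose attribute sets are supersets of the query attributes.
    A linear scan here; an RD-Tree answers the same superset query without
    visiting every object, which is where the reported speed-up comes from."""
    return {obj for obj, attrs in context.items() if attributes <= attrs}

context = {
    "file1": {"pdf", "2023", "draft"},
    "file2": {"pdf", "2023"},
    "file3": {"txt", "2023"},
}
print(extent(context, {"pdf", "2023"}))  # {'file1', 'file2'}
print(len(extent(context, {"2023"})))    # extent size: 3
```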
Abstract:
Background aims: The selection of medium and associated reagents for human mesenchymal stromal cell (hMSC) culture forms an integral part of manufacturing process development and must be suitable for multiple process scales and expansion technologies. Methods: In this work, we have expanded BM-hMSCs in fetal bovine serum (FBS)- and human platelet lysate (HPL)-containing media in both a monolayer and a suspension-based microcarrier process. Results: The introduction of HPL into the monolayer process increased the BM-hMSC growth rate at the first experimental passage by 0.049/day and 0.127/day for the two BM-hMSC donors compared with the FBS-based monolayer process. This increase in growth rate in HPL-containing medium was associated with an increase in inter-donor consistency, with an inter-donor range of 0.406 cumulative population doublings after 18 days compared with 2.013 in FBS-containing medium. Identity and quality characteristics of the BM-hMSCs are also comparable between conditions in terms of colony-forming potential, osteogenic potential and expression of key genes during monolayer culture and post-harvest from microcarrier expansion. BM-hMSCs cultured on microcarriers in HPL-containing medium demonstrated a reduction in the initial lag phase for both BM-hMSC donors and an increased BM-hMSC yield after 6 days of culture, at 1.20 ± 0.17 × 10^5 and 1.02 ± 0.005 × 10^5 cells/mL compared with 0.79 ± 0.05 × 10^5 and 0.36 ± 0.04 × 10^5 cells/mL in FBS-containing medium. Conclusions: This study has demonstrated that HPL-containing medium, compared with FBS-containing medium, delivers increased growth and comparability across two BM-hMSC donors between monolayer and microcarrier culture, which has key implications for process transfer during scale-up.
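The growth figures above rest on standard formulas; a small sketch (our illustration, with made-up counts) computing cumulative population doublings and the specific growth rate from cell counts:

```python
import math

def population_doublings(n_initial, n_final):
    """Cumulative population doublings over a passage."""
    return math.log2(n_final / n_initial)

def specific_growth_rate(n_initial, n_final, days):
    """Specific growth rate (per day), assuming exponential growth."""
    return math.log(n_final / n_initial) / days

# Illustrative numbers only (not the study's raw counts):
print(population_doublings(1e5, 8e5))        # 3.0 doublings
print(specific_growth_rate(1e5, 8e5, 6.0))   # ~0.35 per day
```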
Abstract:
This research examined the factors contributing to the performance of online grocers prior to, and following, the 2000 dot-com collapse. The primary goals were to assess the relationship between a company's business model(s) and its performance in the online grocery channel and to determine whether there were other company- and/or market-related factors that could account for company performance. To assess the primary goals, a case-based theory-building process was utilized. A three-way cross-case analysis comprising Peapod, GroceryWorks, and Tesco examined the common profit components, the structural-category (e.g., pure-play, partnership, and hybrid) profit components, and the idiosyncratic profit components related to each specific company. Based on the analysis, it was determined that online grocery store business models could be represented at three distinct but hierarchically related levels. The first level, termed the core model, represented the basic profit structure that all online grocers needed in order to conduct operations. The next level, termed the structural model, represented the profit structure associated with the specific business model configuration (i.e., pure-play, partnership, hybrid). The last level, termed the augmented model, represented the company's business model when idiosyncratic profit components were included. Of the five company-related factors, scalability, rate of expansion, and automation level were potential candidates for helping to explain online grocer performance. In addition, all the market-structure-related factors were deemed possible candidates for helping to explain online grocer performance. The study concluded by positing an alternative hypothesis concerning the performance of online grocers. Prior to this study, the prevailing wisdom was that business models were the primary cause of online grocer performance. However, based on the core model analysis, it was hypothesized that customer relationship activities (i.e., advertising, promotions, and loyalty program tie-ins) were the real drivers of online grocer performance.
Abstract:
Among the various ways of adopting the biographical approach, we used the curricula vitae (CVs) of Brazilian researchers who work as social scientists in health as our research material. These CVs are part of the Lattes Platform of CNPq (the National Council for Scientific and Technological Development), which includes research and institutional directories. We analyzed 238 CVs for this study. The CVs contain, among other things, the following information: professional qualifications, activities and projects, academic production, participation in panels for the evaluation of theses and dissertations, research centers and laboratories, and a summarized autobiography. This work includes a brief review of the importance of autobiography for the social sciences, emphasizing the CV as a form of autobiographical practice. We highlight some results: the group consists predominantly of women; most hold degrees in social sciences, anthropology, sociology or political science, together with postgraduate degrees; and the highest concentration of social scientists is located in Brazil's southern and southeastern regions. In some institutions the main activities of social scientists are teaching and research, with great thematic diversity in the research undertaken.
Abstract:
The micellization of a homologous series of zwitterionic surfactants, a group of sulfobetaines, was studied using isothermal titration calorimetry (ITC) in the temperature range from 15 to 65 °C. The increase in both temperature and alkyl chain length leads to more negative values of ΔG°mic, favoring micellization. The entropic term (ΔS°mic) is predominant at lower temperatures, and above ca. 55-65 °C the enthalpic term (ΔH°mic) becomes prevalent, indicating a jointly driven process as the temperature increases. The interaction of these sulfobetaines with different polymers was also studied by ITC. Among the polymers studied, only two induced the formation of micellar aggregates at lower surfactant concentrations: poly(acrylic acid), PAA, probably due to the formation of hydrogen bonds between the carboxylic group of the polymer and the sulfonate group of the surfactant, and poly(sodium 4-styrenesulfonate), PSS, probably due to the incorporation of the hydrophobic styrene group into the micelles. The prevalence of hydrophobic rather than electrostatic contributions to the interaction between the sulfobetaines and PSS was confirmed by an increased interaction enthalpy in the presence of electrolytes (NaCl) and by the observation of a significant temperature dependence, the latter consistent with the proposed removal of hydrophobic groups from water.
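The decomposition behind these observations is the standard thermodynamic relation (stated here for reference, not quoted from the paper), where the second, approximate form applies to nonionic and zwitterionic surfactants and x_cmc is the critical micelle concentration in mole-fraction units:

\Delta G^{0}_{\mathrm{mic}} = \Delta H^{0}_{\mathrm{mic}} - T\,\Delta S^{0}_{\mathrm{mic}}, \qquad \Delta G^{0}_{\mathrm{mic}} \approx RT \ln x_{\mathrm{cmc}}

As T rises, ΔH°mic typically becomes more negative while the TΔS°mic contribution shrinks, consistent with the shift from an entropy-driven to an enthalpy-driven regime reported above.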
Abstract:
Maternal mortality (MM) is a core indicator of disparities in women's rights. The study of near miss cases is strategic for identifying breakdowns in obstetric care. In absolute numbers, both MM and the occurrence of eclampsia are rare events. We aim to assess the obstetric care indicators and the main predictors of severe maternal outcome from eclampsia (SMO: maternal death plus maternal near miss). This is a secondary analysis of a multicenter, cross-sectional study including 27 centers from all geographic regions of Brazil, from 2009 to 2010. A total of 426 cases of eclampsia were identified and classified according to the outcome: SMO or non-SMO. We classified facilities as belonging to low- or high-income regions and calculated the WHO's obstetric health indicators. SPSS and Stata software were used to calculate prevalence ratios (PR) and their respective 95% confidence intervals (CI) to assess maternal characteristics, clinical and obstetric history, and access to health services as predictors of SMO, subsequently correlating them with the corresponding perinatal outcomes and applying multiple regression analysis (adjusted for cluster effect). Prevalence of, and mortality indexes for, eclampsia in higher- and lower-income regions were 0.2%/0.8% and 8.1%/22%, respectively. Regarding difficulties in access to health care, ICU admission (adjPR 3.61; 95% CI 1.77-7.35) and inadequate monitoring (adjPR 2.31; 95% CI 1.48-3.59) were associated with SMO. Morbidity and mortality associated with eclampsia were high in Brazil, especially in lower-income regions. Promoting quality maternal health care and improving the availability of obstetric emergency care are essential actions to relieve the burden of eclampsia.
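As a worked illustration of the reported measures (our sketch with hypothetical counts, not the study's data), the following computes a prevalence ratio and its 95% CI from a 2×2 table using the usual log-normal (Katz) approximation:

```python
import math

def prevalence_ratio(a, n1, c, n0, z=1.96):
    """Prevalence ratio of exposed (a/n1) vs unexposed (c/n0),
    with a 95% CI from the log-normal (Katz) approximation."""
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo, hi = math.exp(math.log(pr) - z * se), math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts: 30/60 with SMO among women admitted to an ICU,
# 40/290 among the rest.
print(prevalence_ratio(30, 60, 40, 290))  # PR ~3.6 with its 95% CI
```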