52 results for p-median problem
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
In Brazil, sugarcane fields are often burned to facilitate manual harvesting, and this burning causes environmental pollution from the large amounts of soot released into the atmosphere. This material contains numerous organic compounds, such as PAHs. In this study, the concentrations of PAHs in two particulate-matter fractions (PM(2.5) and PM(10)) in the city of Araraquara (SE Brazil, with around 200,000 inhabitants and surrounded by sugarcane plantations) were determined during the sugarcane harvest (HV) and non-harvest (NHV) seasons in 2008 and 2009. The sampling strategy included four campaigns, with 60 samples in the NHV season and 220 samples in the HV season. The PM(2.5) and PM(10) fractions were collected using a dichotomous sampler (10 L min(-1), 24 h) with Teflon filters. The filter sets were extracted (ultrasonic bath with hexane/acetone, 1:1 v/v) and analyzed by HPLC with fluorescence detection. The median concentration of total PAHs in PM(2.5) in 2009 was 0.99 ng m(-3) in the NHV season and 3.3 ng m(-3) in the HV season. In the HV season, the total concentration of carcinogenic PAHs (benz(a)anthracene, benzo(b)fluoranthene, benzo(k)fluoranthene, and benzo(a)pyrene) was 5 times higher than in the NHV season. Median B(a)P concentrations were 0.017 ng m(-3) and 0.12 ng m(-3) for the NHV and HV seasons, respectively. The potential cancer risk associated with exposure through inhalation of these compounds was estimated based on the benzo[a]pyrene toxic equivalence (BaP(eq)), where the overall toxicity of a PAH mixture is defined by the concentration of each compound multiplied by its relative toxic equivalence factor (TEF). Median BaP(eq) values (2008 and 2009) ranged from 0.65 to 1.0 ng m(-3) in the NHV season and from 1.2 to 1.4 ng m(-3) in the HV season. Considering that the maximum permissible BaP(eq) in ambient air, in terms of increased carcinogenic risk, is 1 ng m(-3), our data suggest that the level of human exposure to PAHs in cities surrounded by sugarcane crops where burning is practiced is cause for concern. (C) 2010 Published by Elsevier Ltd.
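As a reading aid, the toxic-equivalence scheme mentioned above can be written compactly as follows; the notation (C_i, TEF_i) is introduced here for illustration and is not taken from the paper:

```latex
\mathrm{BaP_{eq}} = \sum_{i=1}^{n} C_i \,\mathrm{TEF}_i
```

where C_i is the ambient concentration of the i-th PAH and TEF_i its toxic equivalence factor relative to benzo[a]pyrene (for which TEF = 1).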
Abstract:
This paper addresses the capacitated lot sizing problem (CLSP) with a single stage composed of multiple plants, items and periods, with setup carry-over among the periods. The CLSP is well studied and many heuristics have been proposed to solve it. Nevertheless, few studies have explored the multi-plant capacitated lot sizing problem (MPCLSP), and consequently few solution methods have been proposed for it. Furthermore, to our knowledge, no study of the MPCLSP with setup carry-over has been reported in the literature. This paper presents a mathematical model and a GRASP (Greedy Randomized Adaptive Search Procedure) with path relinking for the MPCLSP with setup carry-over. This solution method is an extension and adaptation of a methodology previously applied to the problem without setup carry-over. Computational tests showed that the improvement from the setup carry-over is significant in terms of solution value, with only a small increase in computational time.
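For readers unfamiliar with the metaheuristic named above, the sketch below shows the generic structure of a GRASP with path relinking on a deliberately tiny toy assignment problem; it is only an illustration of the mechanics, not the authors' MPCLSP model or code, and all names (COSTS, construct, path_relink, etc.) are ours.

```python
import random

# Toy data: cost of assigning each item to each plant (illustrative only).
COSTS = [[4, 7, 3], [2, 9, 5], [8, 1, 6], [5, 5, 2]]

def cost(sol):
    return sum(COSTS[i][p] for i, p in enumerate(sol))

def construct(rng, alpha=0.3):
    """Greedy randomized construction using a restricted candidate list (RCL)."""
    sol = []
    for item_costs in COSTS:
        lo, hi = min(item_costs), max(item_costs)
        rcl = [p for p, c in enumerate(item_costs) if c <= lo + alpha * (hi - lo)]
        sol.append(rng.choice(rcl))
    return sol

def local_search(sol):
    """First-improvement local search over single-item reassignments."""
    improved = True
    while improved:
        improved = False
        for i, p in enumerate(sol):
            best_p = min(range(len(COSTS[i])), key=lambda q: COSTS[i][q])
            if COSTS[i][best_p] < COSTS[i][p]:
                sol[i], improved = best_p, True
    return sol

def path_relink(src, guide):
    """Walk from src toward guide one attribute at a time, keeping the best visited solution."""
    best, cur = list(src), list(src)
    for i in range(len(src)):
        if cur[i] != guide[i]:
            cur[i] = guide[i]
            if cost(cur) < cost(best):
                best = list(cur)
    return best

def grasp_pr(iters=20, seed=0):
    rng, best = random.Random(seed), None
    for _ in range(iters):
        cand = local_search(construct(rng))
        if best is not None:
            cand = min(cand, path_relink(cand, best), key=cost)
        if best is None or cost(cand) < cost(best):
            best = list(cand)
    return best, cost(best)

print(grasp_pr())  # e.g. ([2, 0, 1, 2], 8)
```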
Abstract:
OBJECTIVE: To evaluate the serum concentrations of retinol and beta-carotene in preschool children in Teresina, Piauí, Brazil, characterizing their anthropometric profile and food consumption. MATERIALS AND METHODS: Cross-sectional study involving 135 children attending a municipal daycare center, with nutritional status assessed by biochemical (serum retinol and beta-carotene concentrations), anthropometric (weight-for-height - W/H and height-for-age - H/A indices) and dietary (food consumption frequency) methods. RESULTS: The prevalence of vitamin A deficiency (VAD) was 8.9% (95% CI: 4.7 - 15.0%), and an association was found between previous supplementation and retinol concentrations, with a higher proportion of children with normal retinol levels among those supplemented (p = 0.025). Retinol and beta-carotene concentrations were correlated, although only weakly to moderately (p < 0.021). The percentages of children with low W/H and low H/A were 1.9% (95% CI: 0.2 - 6.8%) and 9.7% (95% CI: 4.8 - 17.1%), respectively. The dietary assessment showed low consumption of vitamin A-rich foods. CONCLUSIONS: The high prevalence of VAD among the children, combined with the high percentage of children with retinol values only within the acceptable range, the low median beta-carotene values, the high percentage of stunting and the inadequate consumption of vitamin A-rich foods, indicates the need to improve health and nutrition education strategies for the population, encouraging the consumption of vitamin A food sources as important self-sustaining measures against the problem. In addition, incentives for food fortification and the strengthening of supplementation programs should be considered.
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; in principle, its computation grows exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson-related processes.
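The max-flow step mentioned above can be computed with a standard augmenting-path algorithm. The sketch below is a self-contained Edmonds-Karp (BFS-based Ford-Fulkerson) implementation on a toy graph; it illustrates only the generic max-flow computation, not the paper's reduction from context trees to a flow network, and the example graph is ours.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp (BFS-based Ford-Fulkerson).
    `capacity` maps u -> {v: capacity of edge u->v}."""
    # Residual graph: forward capacities plus zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # Breadth-first search for a shortest augmenting path.
        parent, queue = {source: None}, deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Trace the path back and push the bottleneck capacity along it.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

toy = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}}
print(max_flow(toy, "s", "t"))  # -> 5
```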
Abstract:
The width of a closed convex subset of n-dimensional Euclidean space is the distance between two parallel supporting hyperplanes. The Blaschke-Lebesgue problem consists of minimizing the volume in the class of convex sets of fixed constant width and is still open in dimension n >= 3. In this paper we describe a necessary condition that the minimizer of the Blaschke-Lebesgue problem must satisfy in dimension n = 3: we prove that the smooth components of the boundary of the minimizer have constant smaller principal curvature, and therefore are either spherical caps or pieces of tubes (canal surfaces).
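In symbols, the variational problem discussed above can be stated roughly as follows; the notation is introduced here only to fix ideas and is not taken from the paper:

```latex
\min \left\{ \operatorname{Vol}(K) \;:\; K \subset \mathbb{R}^{n} \text{ convex},\ w(K,u) = w_0 \ \text{for every direction } u \right\},
```

where w(K,u) denotes the distance between the two supporting hyperplanes of K orthogonal to u, and w_0 > 0 is the prescribed constant width. In the plane (n = 2) the minimizer is the Reuleaux triangle; the case n >= 3 treated in the paper remains open.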
Abstract:
The first problem of the Seleucid mathematical cuneiform tablet BM 34568 calculates the diagonal of a rectangle from its sides without resorting to the Pythagorean rule. For this reason, it has been a source of discussion among specialists ever since its first publication, but so far no consensus in relation to its mathematical meaning has been attained. This paper presents two new interpretations of the scribe's procedure, based on the assumption that he was able to reduce the problem to a standard Mesopotamian question about reciprocal numbers. These new interpretations are then linked to interpretations of the Old Babylonian tablet Plimpton 322 and to the presence of Pythagorean triples in the contexts of Old Babylonian and Hellenistic mathematics. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
Study design: Cross-sectional study. Objectives: To observe whether there is a relationship between the level of injury according to the American Spinal Injury Association (ASIA) and cortical somatosensory evoked potential (SSEP) recordings of the median nerve in patients with quadriplegia. Setting: Rehabilitation Outpatient Clinic at a university hospital in Brazil. Methods: Fourteen individuals with quadriplegia and 8 healthy individuals were evaluated. Electrophysiological assessment of the median nerve was performed with evoked potential equipment. The injury level was determined using the ASIA scale. N(9), N(13) and N(20) were analyzed based on the presence or absence of responses; the parameters used to analyze these responses were latency and amplitude. Data were analyzed using mixed-effect models. Results: N(9) responses were found in all patients with quadriplegia, with latency and amplitude similar to those observed in healthy individuals; N(13) responses were not found in any patient with quadriplegia. N(20) responses were not found in C5 patients with quadriplegia but were present in C6 and C7 patients; their latencies were similar to those of healthy individuals (P > 0.05), but the amplitudes were decreased (P < 0.05). Conclusion: This study suggests that SSEP responses depend on the injury level, considering that individuals with C6 and C7 injury levels, both complete and incomplete, presented SSEP recordings in the cortical area. It also showed a relationship between the level of spinal cord injury assessed by ASIA and the median nerve SSEP responses, through the latency and amplitude recordings. Spinal Cord (2009) 47, 372-378; doi:10.1038/sc.2008.147; published online 20 January 2009
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
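Schematically, the constrained minimization and a typical exterior penalty relaxation of it take the following form; the notation below (Π for the total potential energy, u for the displacement field, ε for the penalty parameter) is introduced here only to fix ideas, and an interior formulation would replace the penalty term with a barrier that blows up as the constraint boundary is approached:

```latex
\min_{\mathbf{u}} \ \Pi(\mathbf{u})
\quad \text{subject to} \quad
\det\!\big(\mathbf{I} + \nabla\mathbf{u}(\mathbf{x})\big) > 0 \ \ \forall\, \mathbf{x} \in \Omega,
\qquad
\Pi_{\varepsilon}(\mathbf{u}) \;=\; \Pi(\mathbf{u})
+ \frac{1}{\varepsilon}\int_{\Omega} \big[\min\{0,\ \det(\mathbf{I}+\nabla\mathbf{u})\}\big]^{2}\, d\mathbf{x},
```

where the constraint expresses local invertibility of the deformation and ε > 0 is driven to zero as the penalization is enforced.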
Abstract:
This paper addresses the time-variant reliability analysis of structures with random resistance or random system parameters. It deals with the problem of a random load process crossing a random barrier level. The implications of approximating the arrival rate of the first overload by an ensemble-crossing rate are studied. The error involved in this so-called "ensemble-crossing rate" approximation is described in terms of load process and barrier distribution parameters, and in terms of the number of load cycles. Existing results are reviewed, and significant improvements involving load process bandwidth, mean-crossing frequency and time are presented. The paper shows that the ensemble-crossing rate approximation can be accurate enough for problems where the load process variance is large in comparison to the barrier variance, and especially when the number of load cycles is small. This includes important practical applications like random vibration due to impact loadings and earthquake loading. Two application examples are presented, one involving earthquake loading and one involving a frame structure subject to wind and snow loadings. (C) 2007 Elsevier Ltd. All rights reserved.
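For orientation, one common way to write the approximation discussed above is the Poisson-type estimate below, where the outer expectation over the random barrier R is what makes the crossing rate an "ensemble" quantity; the notation is ours and may differ from the paper's:

```latex
P_f(0,T) \;\approx\; 1 - \exp\!\left(-\int_{0}^{T} \nu^{+}(t)\, dt\right),
\qquad
\nu^{+}(t) \;=\; \mathrm{E}_{R}\!\left[\,\nu^{+}(t \mid R = r)\,\right],
```

where ν⁺(t | R = r) is the mean up-crossing rate of the load process over the fixed barrier level r and P_f(0,T) is the probability of at least one overload in (0, T).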
Abstract:
This paper addresses the non-preemptive single machine scheduling problem to minimize total tardiness. We are interested in the online version of this problem, where orders arrive at the system at random times and jobs have to be scheduled without knowledge of the jobs that will arrive afterwards. The processing times and due dates become known when the order is placed. Order release dates occur only at the beginning of periodic intervals. A customized approximate dynamic programming method is introduced for this problem. We also present numerical experiments that assess the reliability of the new approach and show that it performs better than a myopic policy.
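To make the notion of a myopic baseline concrete, the sketch below implements a simple earliest-due-date dispatching rule for the online, non-preemptive setting described above; it is only an illustration of the kind of myopic policy the paper compares against, not the authors' approximate dynamic programming method, and the job data are invented.

```python
import heapq

def myopic_total_tardiness(jobs):
    """jobs: list of (release_time, processing_time, due_date).
    Whenever the machine is free, start the released job with the earliest due date."""
    jobs = sorted(jobs)                       # process arrivals in release order
    released, t, total_tardiness, i = [], 0, 0, 0
    while i < len(jobs) or released:
        while i < len(jobs) and jobs[i][0] <= t:
            r, p, d = jobs[i]
            heapq.heappush(released, (d, p))  # order released jobs by due date
            i += 1
        if not released:                      # machine idle: jump to the next arrival
            t = jobs[i][0]
            continue
        d, p = heapq.heappop(released)
        t += p                                # non-preemptive processing
        total_tardiness += max(0, t - d)
    return total_tardiness

print(myopic_total_tardiness([(0, 3, 4), (1, 2, 3), (5, 1, 6)]))  # -> 2
```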
Abstract:
In this paper, we consider a real-life heterogeneous fleet vehicle routing problem with time windows and split deliveries that arises in a major Brazilian retail group. A single depot serves 519 of the group's stores, distributed across 11 Brazilian states. To find good solutions to this problem, we propose heuristics to build initial solutions and a scatter search (SS) approach. The solutions produced are then compared with the routes actually used by the company. Our results show that the total distribution cost can be reduced significantly when such methods are used. Experimental testing with benchmark instances is used to assess the merit of our proposed procedure. (C) 2008 Published by Elsevier B.V.
Abstract:
In this paper, we devise a separation principle for the finite horizon quadratic optimal control problem of continuous-time Markovian jump linear systems driven by a Wiener process and with partial observations. We assume that the output variable and the jump parameters are available to the controller. It is desired to design a dynamic Markovian jump controller such that the closed loop system minimizes the quadratic functional cost of the system over a finite horizon period of time. As in the case with no jumps, we show that an optimal controller can be obtained from two coupled Riccati differential equations, one associated with the optimal control problem when the state variable is available, and the other associated with the optimal filtering problem. This is a separation principle for the finite horizon quadratic optimal control problem for continuous-time Markovian jump linear systems. For the case in which the matrices are all time-invariant, we analyze the asymptotic convergence of the solution of the derived interconnected Riccati differential equations to the solution of the associated set of coupled algebraic Riccati equations, as well as the mean square stabilizing property of this limiting solution. When there is only one mode of operation, our results coincide with the traditional ones for the LQG control of continuous-time linear systems.
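As a rough guide to the class of problems described above, a generic finite-horizon formulation reads as follows; the notation (θ(t) for the Markov jump parameter, W and V for independent Wiener processes, Q, R, F for weighting matrices) is introduced here for illustration and may differ from the paper's:

```latex
dx(t) = A_{\theta(t)} x(t)\,dt + B_{\theta(t)} u(t)\,dt + G_{\theta(t)}\, dW(t), \qquad
dy(t) = C_{\theta(t)} x(t)\,dt + H_{\theta(t)}\, dV(t),

J = \mathrm{E}\!\left[ \int_{0}^{T} \Big( x(t)^{\top} Q_{\theta(t)}\, x(t) + u(t)^{\top} R_{\theta(t)}\, u(t) \Big)\, dt
 \;+\; x(T)^{\top} F_{\theta(T)}\, x(T) \right].
```

The separation principle stated in the abstract asserts that the optimal controller feeds a filtered state estimate into a control gain, with the filter and the gain each obtained from its own (coupled) Riccati differential equation.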
Abstract:
We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available, and therefore the values of the jump parameter are not accessible. It is a well known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed loop system is mean square stable and minimizes the stationary expected value of the mean square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated with a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequalities (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters is considered in the literature. Finally, we illustrate the results with an example.
Abstract:
Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations, and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and non-hub nodes must be assigned to exactly one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic, as well as a two-stage integrated tabu search heuristic, to solve this problem. With the multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the location and the allocation parts of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets), as well as new and modified instances, show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances with up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs. Published by Elsevier Ltd.
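To illustrate the search mechanics named above (multi-start plus tabu search with swap moves), the sketch below applies them to a deliberately simplified location objective in which nodes are allocated to their nearest open hub and inter-hub transfer costs are ignored; it is not the USAHLP model or the authors' heuristic, and all data and names in it are invented.

```python
import random

# Toy data: distances on a line and identical fixed hub-opening costs.
N = 8
DIST = [[abs(i - j) for j in range(N)] for i in range(N)]
FIXED = [3] * N

def cost(hubs):
    """Fixed costs of the open hubs plus nearest-hub allocation costs."""
    return sum(FIXED[h] for h in hubs) + sum(min(DIST[i][h] for h in hubs) for i in range(N))

def tabu_search(start, iters=30, tenure=3):
    current, best = set(start), set(start)
    tabu = {}                                   # move -> last iteration at which it remains tabu
    for it in range(iters):
        candidates = []
        for h_out in current:
            for h_in in set(range(N)) - current:
                neighbor = (current - {h_out}) | {h_in}
                c = cost(neighbor)
                # Aspiration criterion: a tabu move is allowed if it improves the best solution.
                if tabu.get((h_out, h_in), -1) < it or c < cost(best):
                    candidates.append((c, (h_out, h_in), neighbor))
        if not candidates:
            break
        c, (h_out, h_in), current = min(candidates)
        tabu[(h_in, h_out)] = it + tenure       # forbid undoing the move for `tenure` iterations
        if c < cost(best):
            best = set(current)
    return cost(best), sorted(best)

# Multi-start: run tabu search from several random initial hub sets, keep the overall best.
random.seed(1)
starts = [random.sample(range(N), 2) for _ in range(3)]
print(min(tabu_search(s) for s in starts))
```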
Abstract:
This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most of the previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a computational comparative study on 280 problem instances involving different due date scenarios. In addition, optimal solution values for small instances from a known benchmark are provided.
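For reference, the objective described above is usually written as follows; the weights and notation are the standard ones for this problem class and may differ in detail from the paper's:

```latex
\min \ \sum_{j=1}^{n} \big( \alpha_j E_j + \beta_j T_j \big),
\qquad E_j = \max\{0,\ d - C_j\}, \quad T_j = \max\{0,\ C_j - d\},
```

where C_j is the completion time of job j under the chosen schedule, d the common due date, E_j and T_j the earliness and tardiness of job j, and α_j, β_j the corresponding penalty weights.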