907 results for Unconstrained minimization
Abstract:
Application of a DAOM (Environmental Diagnosis of Minimization Opportunities; Catalan: Diagnosi Ambiental d'Oportunitats de Minimització) to the Ajuntament de Banyoles (Banyoles Town Council). A DAOM is a tool developed by the Centre per a l'Empresa i el Medi Ambient that consists of evaluating an activity or process in order to identify possible opportunities for preventing and reducing pollution at the source, and to propose technically and economically viable courses of action.
Abstract:
In the context of fading channels it is well established that, with a constrained transmit power, the bit rates achievable by signals that are not peaky vanish as the bandwidth grows without bound. Stepping back from the limit, we characterize the highest bit rate achievable by such non-peaky signals and the approximate bandwidth where that apex occurs. As it turns out, the gap between the highest rate achievable without peakedness and the infinite-bandwidth capacity (with unconstrained peakedness) is small for virtually all settings of interest to wireless communications. Thus, although strictly achieving capacity in wideband fading channels does require signal peakedness, bit rates not far from capacity can be achieved with conventional signaling formats that do not exhibit the serious practical drawbacks associated with peakedness. In addition, we show that the asymptotic decay of bit rate in the absence of peakedness usually takes hold at bandwidths so large that wideband fading models are called into question. Rather, ultrawideband models ought to be used.
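For reference, the "infinite-bandwidth capacity" benchmark mentioned above coincides with the wideband AWGN limit; in standard notation that is not part of the abstract (P the received power, N_0 the noise spectral density, B the bandwidth),
\[
C_\infty \;=\; \lim_{B\to\infty} B \log_2\!\Big(1 + \frac{P}{N_0 B}\Big) \;=\; \frac{P}{N_0 \ln 2} \ \text{bits/s}.
\]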
Abstract:
For single-user MIMO communication with uncoded and coded QAM signals, we propose bit and power loading schemes that rely only on channel distribution information at the transmitter. To that end, we develop the relationship between the average bit error probability at the output of a ZF linear receiver and the bit rates and powers allocated at the transmitter. This relationship, and the fact that a ZF receiver decouples the MIMO parallel channels, allow leveraging bit loading algorithms already existing in the literature. We solve dual bit rate maximization and power minimization problems and present performance results that illustrate the gains of the proposed scheme with respect to a non-optimized transmission.
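As an illustration of the kind of existing bit loading algorithm the abstract alludes to, the following is a minimal greedy (Hughes-Hartogs-style) sketch over ZF-decoupled parallel streams; the per-stream SNRs, the SNR gap value, and the function itself are illustrative assumptions, not the scheme proposed in the paper.

```python
import numpy as np

def greedy_bit_loading(snr_per_stream, total_power, gamma=10**(9.8/10), max_bits=10):
    """Greedy bit loading over parallel channels (illustrative sketch).

    snr_per_stream : post-ZF SNR of each spatial stream at unit transmit power
    total_power    : total transmit power budget
    gamma          : assumed SNR gap reflecting the target error probability
    Returns the bits and power allocated to each stream.
    """
    n = len(snr_per_stream)
    bits = np.zeros(n, dtype=int)
    power = np.zeros(n)

    def power_for(b, snr):
        # power needed to carry b bits of QAM on a channel of normalized SNR `snr`
        return gamma * (2**b - 1) / snr

    while True:
        # incremental power required to add one more bit on each stream
        inc = np.array([
            power_for(bits[i] + 1, snr_per_stream[i]) - power[i]
            if bits[i] < max_bits else np.inf
            for i in range(n)
        ])
        k = int(np.argmin(inc))
        if not np.isfinite(inc[k]) or power.sum() + inc[k] > total_power:
            break
        bits[k] += 1
        power[k] += inc[k]
    return bits, power

bits, power = greedy_bit_loading(np.array([20.0, 8.0, 3.0, 1.0]), total_power=10.0)
print(bits, power)
```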
Abstract:
BACKGROUND: Multimodality treatment suites for patients with cerebral arteriovenous malformations (AVM) have recently become available. This study was designed to evaluate the feasibility, safety, and impact on treatment of a new intraoperative flat-panel (FP) based integrated surgical and imaging suite for combined endovascular and surgical treatment of cerebral AVM. METHODS: Twenty-five patients with AVMs to be treated with combined endovascular and surgical interventions were prospectively enrolled in this consecutive case series. The hybrid suite allows combined endovascular and surgical approaches with intraoperative scanner-like imaging (XperCT®) and intraoperative 3D rotational angiography (3D-RA). The impact of intraoperative multimodal imaging on feasibility, workflow of combined interventions, surgery, and unexpected imaging findings was analyzed. RESULTS: Twenty-five patients (mean age 38 ± 18.6 years) with a median Spetzler-Martin grade 2 AVM (range 1-4) underwent combined endovascular and surgical procedures. Sixteen patients presented with a ruptured AVM and nine with an unruptured AVM. In 16% (n = 4) of cases, intraoperative imaging visualized AVM remnants ≤3 mm and allowed completion of the resection in the same session. Complete resection was confirmed in all 16 patients who have had follow-up angiography one year after surgery so far. All diagnostic and therapeutic steps, including angiographic control, were performed without having to move the patient. CONCLUSION: The hybrid neurointerventional suite was shown to be a safe and useful setup that allowed an unconstrained combined microsurgical and neuroradiological workflow. It reduces the need for extraoperative angiographic controls and subsequent second-stage surgical revisions, as small AVM remnants can be detected reliably.
Abstract:
We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.
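For context only, the classical Vapnik-Chervonenkis uniform deviation bound (one of the earlier results of this type, not the new inequality derived here; exact constants vary across references) reads, for a class of sets \(\mathcal{A}\) with shatter coefficient \(S_{\mathcal{A}}(n)\):
\[
\Pr\Big\{\, \sup_{A\in\mathcal{A}} \Big|\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_{\{X_i\in A\}} - P(A)\Big| > \epsilon \,\Big\}
\;\le\; 4\, S_{\mathcal{A}}(2n)\, e^{-n\epsilon^{2}/8}.
\]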
Abstract:
The drug discovery process has recently been deeply transformed by the use of computational ligand-based and structure-based methods, helping with lead compound identification and optimization, and ultimately delivering new drug candidates more quickly and at lower cost. Structure-based computational methods for drug discovery mainly involve ligand-protein docking and rapid binding free energy estimation, both of which require force field parameterization for many drug candidates. Here, we present a fast force field generation tool, called SwissParam, able to generate, for an arbitrary small organic molecule, topologies and parameters based on the Merck molecular force field, but in a functional form that is compatible with the CHARMM force field. Output files can be used with CHARMM or GROMACS. The topologies and parameters generated by SwissParam are used by the docking software EADock2 and EADock DSS to describe the small molecules to be docked, whereas the protein is described by the CHARMM force field, allowing them to reach success rates ranging from 56 to 78%. We have also developed a rapid binding free energy estimation approach, using SwissParam for ligands and CHARMM22/27 for proteins, which requires only a short minimization to reproduce the experimental binding free energy of 214 ligand-protein complexes involving 62 different proteins, with a standard error of 2.0 kcal mol(-1) and a correlation coefficient of 0.74. Together, these results demonstrate the relevance of using SwissParam topologies and parameters to describe small organic molecules in computer-aided drug design applications, together with a CHARMM22/27 description of the target protein. SwissParam is available free of charge for academic users at www.swissparam.ch.
Abstract:
This paper studies the interactions between financing constraints and the employment decisions of firms when both fixed-term and permanent employment contracts are available. We first develop a dynamic model that shows the effects of financing constraints and firing costs on employment decisions. Once calibrated, the model shows that financially constrained firms tend to make more intensive use of fixed-term workers, and to make them absorb a larger fraction of the total employment volatility, than financially unconstrained firms do. We test and confirm the predictions of the model on a unique panel dataset of Italian manufacturing firms with detailed information about the type of workers employed by the firms and about firm financing constraints.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
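A minimal sketch of the flipped-label computation mentioned in the last sentence, assuming binary {0,1} labels and a hypothetical erm routine that returns the minimum empirical 0-1 error of the classifier class on a labeled sample; the routine and the toy stump class below are illustrative assumptions, not part of the paper.

```python
import numpy as np

def maximal_discrepancy(X, y, erm):
    """Maximal discrepancy of a classifier class on a {0,1}-labeled sample.

    Maximizing (error on the first half) - (error on the second half) over the
    class equals 1 - 2 * (minimum empirical error on a copy of the data whose
    first-half labels are flipped), so a single ERM call suffices.
    `erm(X, y)` is an assumed helper returning that minimum empirical 0-1 error.
    """
    n = (len(y) // 2) * 2                       # use an even number of points
    y_flip = np.array(y[:n], copy=True)
    y_flip[: n // 2] = 1 - y_flip[: n // 2]     # flip labels on the first half
    return 1.0 - 2.0 * erm(X[:n], y_flip)

def erm_stumps(X, y):
    """Toy ERM: minimum empirical error of 1-D threshold ('stump') classifiers."""
    best = 1.0
    for t in np.unique(X):
        for sign in (1, -1):
            pred = (sign * (X - t) > 0).astype(int)
            best = min(best, float(np.mean(pred != y)))
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=40)
y = (X > 0).astype(int)
print(maximal_discrepancy(X, y, erm_stumps))
```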
Abstract:
This paper describes an optimized model to support QoS by means of congestion minimization on LSPs (Label Switching Paths). To build this model, we start from a CFA (Capacity and Flow Allocation) model. As that model does not consider the buffer size when calculating the capacity cost, our model, named BCA (Buffer Capacity Allocation), takes this issue into account and improves on the CFA performance. To test our proposal, we perform several simulations; the results show that the BCA model minimizes LSP congestion and distributes flows uniformly over the network.
Abstract:
In recent years, many companies have opted to use standardized management systems in order to guarantee the profitability and reliability of the results of implementing the management system in question. It was during the 1990s that the implementation of management systems began to become important in most economic sectors. Broadly speaking, the evolution of management systems started first in the field of quality, then in environmental management, and finally in occupational risk prevention. Over recent years these three types of management systems have been progressively integrated, reducing the resources and effort devoted to management and significantly improving the effectiveness and efficiency of these systems. The main objective of this project is to define a management system that allows the company to conduct its activities in a simplified and orderly way, while also providing the information needed to correct and improve those activities. A further objective of this project is to design an integrated management system (SGI) that exploits the synergies generated in the different areas of the company itself and fosters interactions between the different levels of the organization. As a consequence, information flows within the company will improve considerably, minimizing effort and loss of information. The method chosen for implementing the SGI is Process Management, which is based on defining and monitoring the company's processes, starting from the customer's needs and ending when those needs are satisfied. In conclusion, at the end of this project an SGI will be obtained, with all of the company's processes defined and implemented, that complies with the UNE-EN-ISO 9001:00, UNE-EN-ISO 14001:04, and OHSAS 18001:07 standards. This SGI, which has been developed from a documentary and theoretical point of view, will bring an improvement in the operational effectiveness of the processes and a significant improvement in the company's competitiveness.
Abstract:
PURPOSE: To explore whether triaxial accelerometric measurements can be utilized to accurately assess speed and incline of running in free-living conditions. METHODS: Body accelerations during running were recorded at the lower back and at the heel by a portable data logger in 20 human subjects (10 men and 10 women). After parameterizing body accelerations, two neural networks were designed to recognize each running pattern and calculate speed and incline. Each subject ran 18 times on outdoor roads at various speeds and inclines; 12 runs were used to calibrate the neural networks whereas the 6 other runs were used to validate the model. RESULTS: A small difference between the estimated and the actual values was observed: the square root of the mean square error (RMSE) was 0.12 m x s(-1) for speed and 0.014 radian (rad) (or 1.4% in absolute value) for incline. Multiple regression analysis allowed accurate prediction of speed (RMSE = 0.14 m x s(-1)) but not of incline (RMSE = 0.026 rad or 2.6% slope). CONCLUSION: Triaxial accelerometric measurements allow an accurate estimation of running speed and incline of terrain (the latter with more uncertainty). This will permit the validation of the energetic results generated on the treadmill as applied to more physiological, unconstrained running conditions.
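A minimal sketch of the calibrate-then-validate pattern described above, using a single multi-output feed-forward network for brevity (the study uses two separate networks); the feature choices, network size, and placeholder arrays are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: per-run features derived from lower-back and heel accelerations
# (e.g. step frequency, per-axis acceleration energy); y: [speed (m/s), incline (rad)].
# 12 calibration runs and 6 validation runs per subject, as in the study design.
rng = np.random.default_rng(0)
X_cal, y_cal = rng.normal(size=(12, 6)), rng.normal(size=(12, 2))   # placeholder data
X_val, y_val = rng.normal(size=(6, 6)), rng.normal(size=(6, 2))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X_cal, y_cal)
pred = model.predict(X_val)
rmse = np.sqrt(np.mean((pred - y_val) ** 2, axis=0))   # one RMSE per target
print("RMSE (speed, incline):", rmse)
```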
Abstract:
Tractography algorithms provide us with the ability to non-invasively reconstruct fiber pathways in the white matter (WM) by exploiting the directional information described with diffusion magnetic resonance. These methods can be divided into two major classes: local and global. Local methods reconstruct each fiber tract iteratively by considering only directional information at the voxel level and its neighborhood. Global methods, on the other hand, reconstruct all the fiber tracts of the whole brain simultaneously by solving a global energy minimization problem. The latter have shown improvements compared to previous techniques, but these algorithms still suffer from an important shortcoming that is crucial in the context of brain connectivity analyses. As no anatomical priors are usually considered during the reconstruction process, the recovered fiber tracts are not guaranteed to connect cortical regions and, as a matter of fact, most of them stop prematurely in the WM; this violates important properties of neural connections, which are known to originate in the gray matter (GM) and develop in the WM. Hence, this shortcoming poses serious limitations for the use of these techniques for the assessment of the structural connectivity between brain regions and, de facto, it can potentially bias any subsequent analysis. Moreover, the estimated tracts are not quantitative: every fiber contributes with the same weight toward the predicted diffusion signal. In this work, we propose a novel approach for global tractography that is specifically designed for connectivity analysis applications and that: (i) explicitly enforces anatomical priors on the tracts in the optimization and (ii) considers the effective contribution of each of them, i.e., their volume, to the acquired diffusion magnetic resonance imaging (MRI) data. We evaluated our approach on both a realistic diffusion MRI phantom and in vivo data, and also compared its performance to existing tractography algorithms.
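One generic way to write a global formulation with per-tract contributions of the kind described above (an illustrative sketch, not the paper's actual objective): with y the measured diffusion signal, A the operator mapping candidate tracts (restricted to those connecting gray-matter regions) to the signal, w ≥ 0 the per-tract volume contributions, and Ω an anatomical-prior penalty,
\[
\min_{w \,\ge\, 0} \;\; \|\, y - A\,w \,\|_2^2 \;+\; \lambda\, \Omega(w).
\]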
Abstract:
Diffusion MRI is a well-established imaging modality providing a powerful way to non-invasively probe the structure of the white matter. Despite the potential of the technique, the intrinsically long scan times of these sequences have hampered their use in clinical practice. For this reason, a wide variety of methods have been proposed to shorten acquisition times. [...] Here we review recent work in which we propose to further exploit the versatility of compressed sensing and convex optimization with the aim of better characterizing the sparsity of the fiber orientation distribution. We reformulate the spherical deconvolution problem as a constrained l0 minimization.
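In generic notation (illustrative, not copied from the paper), a constrained l0 reformulation of spherical deconvolution of this kind can be written with Φ the deconvolution dictionary, x the non-negative fiber orientation distribution coefficients, y the measured diffusion signal, and ε a noise-dependent tolerance:
\[
\min_{x \,\ge\, 0} \;\; \|x\|_0 \quad \text{subject to} \quad \|\,\Phi x - y\,\|_2 \;\le\; \epsilon .
\]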
Abstract:
OBJECTIVE: To describe a method to obtain a profile of the duration and intensity (speed) of walking periods over 24 hours in women under free-living conditions. DESIGN: A new method based on accelerometry was designed for analyzing walking activity. To take into account inter-individual variability of acceleration, an individual calibration process was used. Different experiments were performed to highlight the variability of the acceleration vs. walking speed relationship, to analyze the speed prediction accuracy of the method, and to test the assessment of walking distance and duration over 24 h. SUBJECTS: Twenty-eight women were studied (mean ± s.d.): age 39.3 ± 8.9 y; body mass 79.7 ± 11.1 kg; body height 162.9 ± 5.4 cm; and body mass index (BMI) 30.0 ± 3.8 kg/m(2). RESULTS: Accelerometer output was significantly correlated with speed during treadmill walking (r=0.95, P<0.01) and short unconstrained walks (r=0.86, P<0.01), although with a large inter-individual variation of the regression parameters. By using individual calibration, it was possible to predict walking speed on a standard urban circuit (predicted vs. measured r=0.93, P<0.01, s.e.e.=0.51 km/h). In the free-living experiment, women spent on average 79.9 ± 36.0 (range: 31.7-168.2) min/day in displacement activities, of which discontinuous short walking activities represented about 2/3 and continuous ones 1/3. Total walking distance averaged 2.1 ± 1.2 (range: 0.4-4.7) km/day. It was performed at an average speed of 5.0 ± 0.5 (range: 4.1-6.0) km/h. CONCLUSION: An accelerometer measuring the anteroposterior acceleration of the body can estimate walking speed together with the pattern, intensity, and duration of daily walking activity.
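A minimal sketch of the individual calibration step described above, assuming a linear relationship between accelerometer output and walking speed fitted per subject; all variable names and numbers are placeholders, not data from the study.

```python
import numpy as np

# Individual calibration: per-subject linear fit of walking speed (km/h) against
# accelerometer output (counts/s) recorded at known treadmill speeds.
counts_cal = np.array([30.0, 45.0, 60.0, 75.0, 90.0])   # placeholder calibration outputs
speed_cal = np.array([3.0, 4.0, 5.0, 6.0, 7.0])          # corresponding treadmill speeds
slope, intercept = np.polyfit(counts_cal, speed_cal, deg=1)

def predict_speed(counts):
    """Predict walking speed (km/h) from accelerometer output using the individual calibration."""
    return slope * np.asarray(counts) + intercept

# Free-living use: speed per detected walking epoch, plus distance from epoch durations.
counts_free = np.array([52.0, 70.0, 38.0])      # accelerometer output per walking epoch
durations_s = np.array([120.0, 300.0, 60.0])    # duration of each epoch in seconds
speeds = predict_speed(counts_free)                      # km/h
distance_km = np.sum(speeds * durations_s / 3600.0)      # total walking distance in km
print(speeds, distance_km)
```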
Abstract:
The energy and structure of a dilute hard-disk Bose gas are studied in the framework of a variational many-body approach based on a Jastrow correlated ground-state wave function. The asymptotic behaviors of the radial distribution function and the one-body density matrix are analyzed after solving the Euler equation obtained by a free minimization of the hypernetted-chain energy functional. Our results show important deviations from those of the available low-density expansions, already at gas parameter values x ≈ 0.001. The condensate fraction in 2D is also computed and found to be generally lower than the 3D one at the same x.
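For context, the Jastrow correlated trial wave function referred to above has the generic pair-product form (standard notation, not reproduced from the paper), and the free minimization sets the functional derivative of the hypernetted-chain variational energy with respect to the two-body correlation to zero, yielding the Euler equation:
\[
\Psi_J(\mathbf{r}_1,\dots,\mathbf{r}_N) \;=\; \prod_{i<j} f(r_{ij}),
\qquad
\frac{\delta}{\delta f(r)}\;\frac{\langle \Psi_J|\hat H|\Psi_J\rangle}{\langle \Psi_J|\Psi_J\rangle} \;=\; 0 .
\]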