952 results for Non-response model approach
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The article proposes a displacement-measurement technique using a single digital camera and explores its feasibility for modal analysis applications. The proposal is a non-contact measuring approach able to measure multiple points simultaneously with a single digital camera. A modal analysis of a reduced-scale laboratory building structure, based only on the responses of the structure measured with the camera, is presented. The focus is on the feasibility and advantages of using a simple, ordinary camera to perform output-only modal analysis of structures. The modal parameters of the structure are estimated from the camera data and also by conventional experimental modal analysis based on the Frequency Response Function (FRF) obtained with the usual sensors, such as an accelerometer and a force cell. The comparison of the two analyses showed that the technique is a promising non-contact measuring tool, relatively simple and effective for use in structural modal analysis.
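As a minimal illustration of the output-only idea (a sketch, not the authors' implementation), the displacement history of a point tracked in the video can be transformed to the frequency domain, where peaks of the amplitude spectrum indicate candidate natural frequencies; the sampling rate `fs` here stands in for the camera frame rate:

```python
import numpy as np

def natural_frequencies(displacement, fs, n_peaks=1):
    """Estimate dominant natural frequencies from a displacement time
    series (e.g. pixel coordinates of a tracked target) via the
    amplitude spectrum of the detrended signal."""
    x = displacement - np.mean(displacement)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # pick the n_peaks largest spectral lines, excluding the DC bin
    order = np.argsort(spectrum[1:])[::-1][:n_peaks] + 1
    return np.sort(freqs[order])

# synthetic check: a 3 Hz oscillation sampled at 60 frames/s
fs = 60.0
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 3.0 * t)
print(natural_frequencies(signal, fs))  # → [3.]
```

In a real setup the `signal` array would come from tracking a marker across video frames; the camera frame rate limits the highest identifiable mode to `fs / 2`.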
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Bio-physical and bio-chemical systems are often based on nanoscale phenomena in different host environments; such many-particle systems can frequently not be solved explicitly, so a physicist, biologist or chemist has to rely on approximate or numerical methods. For a certain class of systems, called integrable, there exist particular mathematical structures and symmetries that permit an exact and explicit description. Most integrable systems we come across are low-dimensional: for instance, a one-dimensional chain of coupled atoms in a DNA molecular system with a particular direction, or one existing as a vector in the environment. This theoretical research paper aims to bring one of the pioneering 'Reaction-Diffusion' aspects of the DNA-plasma material system into an integrable lattice model approach utilizing quantized functional algebras, in order to disseminate the new developments and initiate novel computational and design paradigms.
Abstract:
Introduction. A large number of patients with chronic hepatitis C have not been cured with interferon-based therapy. Therefore, we evaluated the efficacy of amantadine combined with the standard of care (pegylated interferon plus ribavirin) in patients who had not responded to or had relapsed after 24 weeks of treatment with conventional interferon plus ribavirin. Material and methods. Patients stratified by previous response (i.e., non-response or relapse) were randomized to 48 weeks of open-label treatment with peginterferon alfa-2a (40KD) 180 µg/week plus ribavirin 1,000/1,200 mg/day plus amantadine 200 mg/day (triple therapy), or the standard of care (peginterferon alfa-2a [40KD] plus ribavirin). Results. The primary outcome was sustained virological response (SVR), defined as undetectable hepatitis C virus RNA in serum (< 50 IU/mL) at end of follow-up (week 72). Among patients with a previous non-response, 12/53 (22.6%; 95% confidence interval [CI] 12.3-36.2%) randomized to triple therapy achieved an SVR, compared with 16/52 (30.8%; 95% CI 18.7-45.1%) randomized to the standard of care. Among patients with a previous relapse, 22/39 (56.4%; 95% CI 39.6-72.2%) randomized to triple therapy achieved an SVR, compared with 23/38 (60.5%; 95% CI 43.4-76.0%) randomized to the standard of care. Undetectable HCV RNA (< 50 IU/mL) at week 12 had a high positive predictive value for SVR. A substantial proportion of non-responders and relapsers to conventional interferon plus ribavirin achieve an SVR when re-treated with peginterferon alfa-2a (40KD) plus ribavirin. Conclusion. Amantadine does not enhance SVR rates in previously treated patients with chronic hepatitis C and cannot be recommended in this setting.
Abstract:
Model predictive control (MPC) applications in the process industry usually deal with process systems that show time delays (dead times) between the system inputs and outputs. Also, in many industrial applications of MPC, integrating outputs resulting from liquid level control or recycle streams need to be considered as controlled outputs. Conventional MPC packages can be applied to time-delay systems, but the stability of the closed-loop system will depend on the tuning parameters of the controller and cannot be guaranteed even in the nominal case. In this work, a state space model based on the analytical step response model is extended to the case of integrating systems with time delays. This model is applied to the development of two versions of a nominally stable MPC, which is designed for the practical scenario in which one has targets for some of the inputs and/or outputs that may be unreachable, and zone control (or interval tracking) for the remaining outputs. The controller is tested through simulation of a multivariable industrial reactor system.
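A step-response prediction model of the kind mentioned above underlies classic Dynamic Matrix Control; the following sketch (a textbook unconstrained DMC move computation, not the paper's nominally stable state-space formulation) shows how a dead time enters naturally as leading zeros of the step-response coefficients:

```python
import numpy as np

def dmc_control_moves(step_coeffs, y_free, reference, n_moves, move_weight):
    """One iteration of unconstrained Dynamic Matrix Control for a SISO
    system described by its step-response coefficients: minimize
    ||reference - (y_free + A @ du)||^2 + move_weight * ||du||^2."""
    p = len(y_free)                     # prediction horizon
    A = np.zeros((p, n_moves))          # dynamic (step-response) matrix
    for j in range(n_moves):
        A[j:, j] = step_coeffs[: p - j]
    err = reference - y_free            # predicted free-response error
    H = A.T @ A + move_weight * np.eye(n_moves)
    return np.linalg.solve(H, A.T @ err)

# first-order-like step response with a dead time of 2 samples
s = np.array([0.0, 0.0, 0.4, 0.7, 0.85, 0.95, 1.0, 1.0, 1.0, 1.0])
du = dmc_control_moves(s, y_free=np.zeros(10),
                       reference=np.ones(10), n_moves=3, move_weight=0.1)
print(du)
```

The first computed move is applied and the optimization is repeated at the next sampling instant (receding horizon); for an integrating output, the step-response coefficients would keep growing instead of settling at 1.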
Abstract:
In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a given site has gained major attention, especially in the past decade. One of the objectives in PBEE is to quantify the seismic reliability of a structure at a site under future random earthquakes. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is utilized as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on applying the average of a number of spectral acceleration ordinates over an interval of periods, Sa,avg(T1,...,Tn), as a scalar ground motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed reflects the greater or lesser influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. The results using these improved IMs are compared with conventional elastic-based scalar IMs (e.g., pseudo-spectral acceleration, Sa(T1), or peak ground acceleration, PGA) and with an advanced inelastic-based scalar IM (i.e., inelastic spectral displacement, Sdi). The advantages of applying improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground motion prediction models are already available for Sa(Ti), and hence it is possible to employ existing models to assess hazard in terms of Sa,avg; and (ii) "efficiency", i.e., smaller variability of the structural response, which was minimized to find the optimal range over which to compute Sa,avg. More work is needed to also assess the desirable properties of "sufficiency" and "scaling robustness", which are disregarded in this dissertation.
However, for ordinary records (i.e., with no pulse-like effects), using the improved IMs is found to be more accurate than using the elastic- and inelastic-based IMs. For structural demands dominated by the first mode of vibration, the advantage of Sa,avg over the conventionally used Sa(T1) and the advanced Sdi can be negligible. For structural demands with significant higher-mode contributions, an improved scalar IM that incorporates higher modes needs to be utilized. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit-state capacity was chosen as a reliability measure under seismic excitations and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in seismic excitations. The adopted framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures in order to estimate the variability of the structural response (demand) to seismic excitations, conditioned on the IM. The estimate of the seismic risk obtained from the simplified closed-form expression is affected by the choice of IM: the final seismic risk is not constant across IMs, although it remains of the same order of magnitude. Possible reasons concern the assumed nonlinear model, or the insufficiency of the selected IM. Since it is impossible to state what the "real" probability of exceeding a limit state is by looking at the total risk alone, the only way forward is the optimization of the desirable properties of an IM.
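Sa,avg is commonly defined as the geometric mean of spectral ordinates over the chosen period range; a minimal sketch of computing it from a response spectrum might look as follows (the spectrum values and the choice of n = 10 ordinates are illustrative assumptions, not values from the dissertation):

```python
import numpy as np

def sa_avg(periods, sa, t1, tn, n=10):
    """Average spectral acceleration IM: geometric mean of pseudo-spectral
    acceleration ordinates at n periods evenly spaced in [t1, tn].
    `periods`/`sa` describe a ground-motion response spectrum."""
    ts = np.linspace(t1, tn, n)
    ordinates = np.interp(ts, periods, sa)   # spectrum must be sorted by period
    return np.exp(np.mean(np.log(ordinates)))

# illustrative response spectrum (hypothetical values, in g)
T = np.array([0.1, 0.5, 1.0, 2.0, 3.0])
Sa = np.array([0.8, 1.0, 0.5, 0.2, 0.1])
print(sa_avg(T, Sa, t1=0.5, tn=2.0))  # ≈ 0.44
```

The choice of `t1` and `tn` encodes how much weight higher modes and period elongation receive, which is exactly the tuning knob the dissertation investigates.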
Abstract:
Leaf rust caused by Puccinia triticina is a serious disease of durum wheat (Triticum durum) worldwide. However, genetic and molecular mapping studies aimed at characterizing leaf rust resistance genes in durum wheat have only recently been undertaken. The Italian durum wheat cv. Creso shows a high level of resistance to P. triticina that has been considered durable and that appears to be due to a combination of a single dominant gene and one or more additional factors conferring partial resistance. In this study, the genetic basis of the leaf rust resistance carried by Creso was investigated using 176 recombinant inbred lines (RILs) from the cross between cv. Colosseo (C, leaf rust resistance donor) and Lloyd (L, susceptible parent). Colosseo is a cv. directly related to Creso, with the leaf rust resistance phenotype inherited from Creso, and was chosen as the resistance donor because of its better adaptation to the local (Emilia Romagna, Italy) cultivation environment. The RILs were artificially inoculated with a mixture of 16 Italian P. triticina isolates that had been characterized for virulence on seedlings of 22 common wheat cv. Thatcher isolines, each carrying a different leaf rust resistance gene, and for molecular genotypes at 15 simple sequence repeat (SSR) loci, in order to determine their specialization with regard to the host species. The characterization of the leaf rust isolates was conducted at the Cereal Disease Laboratory of the University of Minnesota (St. Paul, USA) (Chapter 2). A genetic linkage map was constructed using segregation data from the population of 176 RILs from the C × L cross. A total of 662 loci, including 162 simple sequence repeats (SSRs) and 500 Diversity Arrays Technology markers (DArTs), were analyzed by means of the package EasyMap 0.1. The integrated SSR-DArT linkage map consisted of 554 loci (162 SSR and 392 DArT markers) grouped into 19 linkage blocks, with an average marker density of 5.7 cM/marker.
The final map spanned a total of 2022 cM, which corresponds to a tetraploid genome (AABB) coverage of ca. 77% (Chapter 3). The RIL population was phenotyped for resistance to leaf rust under artificial inoculation in 2006; the percentage of infected leaf area (LRS, leaf rust susceptibility) was evaluated at three stages through the disease developmental cycle, and the area under the disease progress curve (AUDPC) was then calculated. The response at the seedling stage (infection type, IT) was also investigated. QTL analysis was carried out by means of the Composite Interval Mapping method based on a selection of markers from the C × L map. A major QTL (QLr.ubo-7B.2) for leaf rust resistance, controlling both the seedling and the adult plant response, was mapped on the distal region of chromosome arm 7BL (deletion bin 7BL10-0.78-1.00), in a gene-dense region known to carry several genes/QTLs for resistance to rusts and other major cereal fungal diseases in wheat and barley. QLr.ubo-7B.2 was identified within a supporting interval of ca. 5 cM, tightly associated with three SSR markers (Xbarc340.2, Xgwm146 and Xgwm344.2), and showed an R2 and an LOD peak value for the AUDPC of 72.9% and 44.5, respectively. Three additional minor QTLs were also detected (QLr.ubo-7B.1 on chr. 7BS, QLr.ubo-2A on chr. 2AL and QLr.ubo-3A on chr. 3AS) (Chapter 4). The presence of the major QTL (QLr.ubo-7B.2) was validated by a linkage disequilibrium (LD)-based test using field data from two different sets of plant materials: i) a set of 62 advanced lines from multiple crosses involving Creso and its directly related resistant derivatives Colosseo and Plinio, and ii) a panel of 164 elite durum wheat accessions representative of the major durum breeding programs of the Mediterranean basin.
Lines and accessions were phenotyped for leaf rust resistance under artificial inoculation in two different field trials carried out at Argelato (BO, Italy) in 2006 and 2007; the durum elite accessions were also evaluated in two additional field experiments in Obregon (Mexico; 2007 and 2008) and in a greenhouse experiment (seedling resistance) at the Cereal Disease Laboratory (St. Paul, USA, 2008). The molecular characterization involved 14 SSR markers mapping to the 7BL chromosome region found to harbour the major QTL. Association analysis was then performed with a mixed-linear-model approach. The results confirmed the presence of a major QTL for leaf rust resistance, both at the adult plant and at the seedling stage, located between markers Xbarc340.2, Xgwm146 and Xgwm344.2, in an interval that coincides with the supporting interval (LOD-2) of QLr.ubo-7B.2 resulting from the RIL QTL analysis (Chapter 5). The identification and mapping of the major QTL associated with the durable leaf rust resistance carried by Creso, together with the identification of the associated SSR markers, will enhance selection efficiency in durum wheat breeding programs (MAS, Marker-Assisted Selection) and will accelerate the release of cvs. with durable resistance through marker-assisted pyramiding of the tagged resistance genes/QTLs most effective against wheat fungal pathogens.
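The LOD values reported above come from interval-mapping software; for intuition, a single-marker LOD score can be sketched as the log-likelihood ratio of a marker regression against the null (no-QTL) model. The toy RIL genotypes and phenotypes below are invented for illustration:

```python
import numpy as np

def lod_score(genotype, phenotype):
    """Single-marker QTL LOD score from a simple linear regression:
    LOD = (n / 2) * log10(RSS_null / RSS_marker)."""
    g = np.asarray(genotype, dtype=float)
    y = np.asarray(phenotype, dtype=float)
    n = len(y)
    rss_null = np.sum((y - y.mean()) ** 2)          # trait mean only
    X = np.column_stack([np.ones(n), g])            # intercept + marker
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_marker = np.sum((y - X @ beta) ** 2)
    return (n / 2) * np.log10(rss_null / rss_marker)

# toy RIL data: the marker explains most of the trait variance
g = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1])
y = 10 + 5 * g + np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.1, -0.3, 0.0, 0.1])
print(lod_score(g, y))
```

A LOD above a genome-wide threshold (often around 3) indicates linkage; Composite Interval Mapping, as used in the study, additionally conditions on background markers to separate linked QTLs.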
Abstract:
In this thesis, the stability analysis of flows in fluid-saturated porous media is investigated. In particular, the contribution of viscous heating to the onset of convective instability in duct flows is analysed. In order to evaluate the contribution of viscous dissipation, different geometries, different models of the balance equations and different boundary conditions are used. Moreover, the local thermal non-equilibrium model is used to study the evolution of the temperature difference between the fluid and the solid matrix in a thermal boundary layer problem. In studying the onset of instability, different techniques for eigenvalue problems have been used: analytical solutions, asymptotic analyses, and numerical solutions by means of original and commercial codes.
Abstract:
In this work in few-nucleon physics, the newly developed Lorentz Integral Transform (LIT) method is applied to the investigation of nuclear photoabsorption and electron scattering off light nuclei. The LIT method makes exact calculations possible without explicit determination of the final states in the continuum. The problem is reduced to the solution of a bound-state-like equation in which the final-state interaction is fully taken into account. The LIT equation is solved via an expansion in hyperspherical harmonics, whose convergence is accelerated by applying an effective interaction within the hyperspherical formalism (EIHH). This work presents the first microscopic calculation of the total photoabsorption cross section below the pion-production threshold for 6Li, 6He and 7Li. The calculations are carried out with central semirealistic NN interactions that partially simulate the tensor force, since the binding energies of the deuteron and of the three-body nuclei are reproduced correctly. The photoabsorption cross section of 6Li shows only one giant dipole resonance, whereas 6He exhibits two distinct peaks corresponding to the break-up of the halo and of the alpha core. The comparison with experimental data shows that adding a P-wave interaction substantially improves the agreement. For 7Li only one giant dipole resonance is found, in good agreement with the available experimental data. Concerning electron scattering, the calculation of the longitudinal and transverse response functions of 4He in the quasi-elastic region at intermediate momentum transfer is presented. A non-relativistic model is used for the charge and current operators. 
The calculations are performed with semirealistic interactions, and a gauge-invariant current is obtained by introducing a meson-exchange current. The effect of the two-body current on the transverse response functions is investigated. Preliminary results are shown and compared with the available experimental data.
Abstract:
In the course of this work, the effect of metal substitution on the structural and magnetic properties of the double perovskites Sr2MM’O6 (M = Fe, substituted by Cr, Zn and Ga; M’ = Re, substituted by Sb) was explored by means of X-ray diffraction, magnetic measurements, band structure calculations, Mößbauer spectroscopy and conductivity measurements. The focus of this study was the determination of (i) the kind and the structural boundary conditions of the magnetic interaction between the M and M’ cations, and (ii) the conditions for the principal applicability of double perovskites as spintronic materials within the band model approach. Strong correlations between the electronic, structural and magnetic properties were found in the study of the double perovskites Sr2Fe1-xMxReO6 (0 < x < 1, M = Zn, Cr). The interplay between the van Hove singularity and the Fermi level plays a crucial role for the magnetic properties. Substitution of Fe by Cr in Sr2FeReO6 leads to a non-monotonic behaviour of the saturation magnetization (MS), with an enhancement for substitution levels up to 10 %. The Curie temperatures (TC) monotonically increase from 401 to 616 K. In contrast, Zn substitution leads to a continuous decrease of MS and TC. The diamagnetic dilution of the Fe sublattice by Zn leads to a transition from an itinerant ferrimagnetic to a localized ferromagnetic material. Thus, Zn substitution inhibits the long-range ferromagnetic interaction within the Fe sublattice while preserving the long-range ferromagnetic interaction within the Re sublattice. Superimposed on the electronic effects is the structural influence, which can be explained by size effects modelled by the tolerance factor t. In the case of Cr substitution, a tetragonal-to-cubic transformation is observed for x > 0.4. For Zn-substituted samples the tetragonal distortion increases linearly with increasing Zn content.
In order to elucidate the nature of the magnetic interaction between the M and M’ cations, Fe and Re were substituted by the valence-invariant main group metals Ga and Sb, respectively. X-ray diffraction reveals that Sr2FeRe1-xSbxO6 (0 < x < 0.9) crystallizes without antisite disorder in the tetragonally distorted perovskite structure (space group I4/mmm). The ferrimagnetic behaviour of the parent compound Sr2FeReO6 changes to antiferromagnetic upon Sb substitution, as determined by magnetic susceptibility measurements. Samples up to a doping level of 0.3 are ferrimagnetic, while Sb contents higher than 0.6 result in an overall antiferromagnetic behaviour. 57Fe Mößbauer results show a coexistence of ferri- and antiferromagnetic clusters within the same perovskite-type crystal structure in the Sb substitution range 0.3 < x < 0.8, whereas Sr2FeReO6 and Sr2FeRe0.9Sb0.1O6 are “purely” ferrimagnetic and Sr2FeRe0.1Sb0.9O6 contains only antiferromagnetically ordered Fe sites. Consequently, replacement of the Re atoms by a nonmagnetic main group element such as Sb blocks the double exchange pathways Fe–O–Re(Sb)–O–Fe along the crystallographic axes of the perovskite unit cell and destroys the itinerant magnetism of the parent compound. The structural and magnetic characterization of Sr2Fe1-xGaxReO6 (0 < x < 0.7) reveals a Ga/Re antisite disorder, which is unexpected because the parent compound Sr2FeReO6 shows no Fe/Re antisite disorder. This antisite disorder strongly depends on the Ga content of the sample. Although the X-ray data do not hint at a phase separation, sample inhomogeneities caused by demixing are observed by a combination of magnetic characterization and Mößbauer spectroscopy. The 57Fe Mößbauer data suggest the formation of two types of clusters: ferrimagnetic Fe-based and paramagnetic Ga-based ones. Below 20 % Ga content, Ga statistically dilutes the Fe–O–Re–O–Fe double exchange pathways.
Cluster formation begins at x = 0.2; for 0.2 < x < 0.4 the paramagnetic Ga-based clusters do not contain any Fe. Fe-containing Ga-based clusters, detectable by Mößbauer spectroscopy, first appear at x = 0.4.
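The size effects mentioned above are usually quantified by the Goldschmidt tolerance factor; a minimal sketch (with illustrative ionic radii, not the values used in this work) is:

```python
import math

def tolerance_factor(r_a, r_b, r_o):
    """Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O))
    for an ABO3 perovskite (ionic radii in Angstrom). For double perovskites
    A2BB'O6 the average of the two B-site radii is commonly used for r_B."""
    return (r_a + r_o) / (math.sqrt(2) * (r_b + r_o))

# illustrative radii (Angstrom): a large A cation, an average B-site, O2-
t = tolerance_factor(r_a=1.44, r_b=0.60, r_o=1.40)
print(round(t, 3))  # → 1.004
```

Values of t close to 1 favour the cubic structure, while smaller t drives the tetragonal (or lower-symmetry) distortions discussed above.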
Abstract:
In the current context of increasing anthropogenic impacts and global climate change, there is a need to understand their possible effects on ecosystems, regarded as providers of essential services and functions on which entire economic and social fabrics rest. The predictive study of ecosystems runs up against their high complexity combined with an equally marked scarcity of integrated observations. The modelling approach appears the best suited to analysing the complex dynamics of ecosystems and to placing experimental results and empirical observations in a complex context. The reductionist-deterministic approach usually adopted in model implementation has, however, not so far proved able to reach the highest levels of complexity within the ecosystem structure. The component that best describes ecosystem complexity is the biotic one, owing to its strong dependence on the other components and on their interactions. In this thesis, a stochastic modelling approach is proposed, based on a naive Bayes compiler operating in a fuzzy environment. The joint use of fuzzy logic and the naive Bayes approach helps to process the level of complexity, and hence uncertainty, inherent in ecosystems. The resulting generative models, called Fuzzy Bayesian Ecological Models (FBEM), appear able to model ecosystem states as a function of the large number of interactions that come into play in determining them. FBEM models were used to assess the environmental risk to the intertidal habitat of sandy beaches under coastal flooding events projected for the period 2010-2100. 
The application was carried out within the EU project “Theseus”, in which the FBEM models were also used for a long-term simulation and to compute habitat-specific tipping points under flooding events of different intensity.
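As a purely hypothetical sketch of the fuzzy/naive-Bayes combination (the thesis's actual FBEM compiler is not described here, and all terms, classes and numbers below are invented), a continuous observation can be fuzzified into membership degrees over linguistic terms, which then weight a naive Bayes update of the class posteriors:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_naive_bayes(x, priors, likelihoods):
    """Fuzzify observation x in [0, 1] into degrees over three linguistic
    terms, then combine per-term class likelihoods as a membership-weighted
    naive Bayes update (likelihood ** membership)."""
    terms = {"low": triangular(x, -0.5, 0.0, 0.5),
             "mid": triangular(x, 0.0, 0.5, 1.0),
             "high": triangular(x, 0.5, 1.0, 1.5)}
    post = dict(priors)
    for cls in post:
        for term, mu in terms.items():
            post[cls] *= likelihoods[cls][term] ** mu
    z = sum(post.values())
    return {cls: p / z for cls, p in post.items()}

# hypothetical habitat-state inference from one stressor level
priors = {"at_risk": 0.5, "stable": 0.5}
likelihoods = {"at_risk": {"low": 0.2, "mid": 0.5, "high": 0.9},
               "stable":  {"low": 0.9, "mid": 0.5, "high": 0.1}}
print(fuzzy_naive_bayes(0.8, priors, likelihoods))
```

A high stressor level (x = 0.8) mostly activates the "high" term, so the posterior shifts toward the "at_risk" class; a full FBEM would chain many such variables and interactions.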
Abstract:
Only 60% of candidates for cardiac resynchronization therapy respond in terms of reverse ventricular remodelling, which is the strongest predictor of reduced mortality and hospitalization. Two possible causes of non-response are the programming of the device and the limits of the transvenous approach. During my doctoral years I carried out three studies aimed at reducing the number of non-responders. The first study evaluates the interventricular delay. In order to optimize resources and provide a real benefit to the patient, I looked for predictors of an optimal interventricular delay different from the simultaneous one set in the basic programming. The only predictor turned out to be a QRS interval > 160 ms, so I proposed a flow chart to optimize only those patients whose optimal programming involves a non-simultaneous interventricular interval. The second work evaluates active fixation of the left ventricular lead with a stent. Dislodgement, a high myocardial stimulation threshold and phrenic nerve stimulation are three problems that limit biventricular pacing. We analysed more than 200 angiographies to identify the anatomical conditions predisposing to lead dislodgement. Prospectively, we decided to use a stent to actively fix the left ventricular lead in all patients presenting the anatomical characteristics favouring dislodgement. There were no further dislodgements, there was a better response in terms of reverse ventricular remodelling, and there were no changes in the electrical parameters of the lead. The third work evaluated the safety and efficacy of left endoventricular pacing. We implanted 26 patients judged to be non-responders to cardiac resynchronization therapy. 
The procedure proved safe, with a risk of complications similar to that of classic biventricular pacing, and effective in halting left ventricular dysfunction and/or improving clinical outcomes over a medium-term follow-up.
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns the assignment of times and resources to a set of activities to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDF graphs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions.
The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on practical-size problems.
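A modular precedence constraint links activities of different iterations through the period; a minimal feasibility check on a hypothetical instance (a sketch of the constraint's semantics, not the authors' filtering algorithm) can be written as:

```python
def modular_precedence_ok(start, dur, period, arcs):
    """Check cyclic precedence arcs (i, j, k):
    start[i] + dur[i] <= start[j] + k * period,
    where k is the iteration distance between activities i and j."""
    return all(start[i] + dur[i] <= start[j] + k * period
               for (i, j, k) in arcs)

# three activities repeated forever (hypothetical instance)
start = {0: 0, 1: 4, 2: 7}
dur = {0: 4, 1: 3, 2: 5}
arcs = [(0, 1, 0),   # 1 follows 0 within the same iteration
        (1, 2, 0),   # 2 follows 1 within the same iteration
        (2, 0, 1)]   # 0 of the NEXT iteration follows 2 (distance k = 1)

print(modular_precedence_ok(start, dur, period=12, arcs=arcs))  # → True
print(modular_precedence_ok(start, dur, period=10, arcs=arcs))  # → False
```

In the constraint programming model described above, the period is a decision variable appearing in exactly such inequalities, which is what makes the model non-linear and lets the solver infer the period from the scheduling decisions.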