818 results for Linear matrix inequalities (LMI) techniques
Abstract:
The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays the conceptual design of turbine airplanes starts from the aircraft specifications, and the turbofan or turboprop best suited to the specific application is then chosen. In the field of aeronautical piston engines, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with improved performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, and several non-linearities are needed to describe the behaviour of the phenomena. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, can offer reliable solutions to such design problems within an acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsion system configuration with minimum mass while satisfying the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The bounds of the design variables, which describe the solution domain from the geometrical point of view, are also introduced. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsion components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step an elitist strategy is applied in order to protect the fittest individuals from disruption by mutation and recombination, letting them survive unchanged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, which is used to build an automatic 3D CAD model of each component of the propulsion system, giving a direct preview of the final product while the engine is still in the preliminary design phase. With the purpose of showing the performance of the algorithm and validating the optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a four-cylinder, four-stroke Diesel. Several checks are performed on the mechanical components of the engine in order to test their feasibility and to decide their survival through the generations. A system of inequalities describes the non-linear relations between the design variables and is used to check the components under static and dynamic load configurations.
The geometrical bounds of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve were chosen as representative of the distribution of the individuals. Then, as an example, the corresponding 3D models of the crankshaft and the connecting rod were automatically built for each simulation. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) with respect to the original configuration, and an acceptable robustness of the method. The algorithm developed here proves to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the design process and joining modern computational performance and technological awareness to long-standing traditional design experience.
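As an illustration of how such a mono-objective, elitist genetic algorithm can be organised, the sketch below implements the same idea in Python with penalty-handled constraints; the mass surrogate, the constraint checks, the variable bounds and all GA parameters are placeholder assumptions for illustration only, not the engine model or the Matlab® implementation developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder design variables: [bore, stroke, crank_web_thickness] in metres.
lower = np.array([0.06, 0.06, 0.015])
upper = np.array([0.12, 0.14, 0.040])

def mass(x):
    # Toy surrogate for the summed mass of the propulsive components (kg).
    bore, stroke, web = x
    return 4 * (25e3 * bore**2 * stroke + 8e3 * web * bore**2) + 7.0

def constraint_violations(x):
    # Toy geometric/structural checks; each entry must be <= 0 to be feasible.
    bore, stroke, web = x
    return np.array([
        0.7 - stroke / bore,        # stroke-to-bore ratio not too small
        stroke / bore - 1.4,        # ...and not too large
        0.02 - web,                 # minimum crank web thickness
    ])

def fitness(x):
    # Penalised objective: mass plus a large penalty for violated constraints.
    viol = np.maximum(constraint_violations(x), 0.0)
    return mass(x) + 1e3 * viol.sum()

pop_size, n_gen, elite, mut_sigma = 40, 200, 2, 0.05
pop = rng.uniform(lower, upper, size=(pop_size, lower.size))

for gen in range(n_gen):
    fit = np.array([fitness(ind) for ind in pop])
    order = np.argsort(fit)
    new_pop = [pop[i].copy() for i in order[:elite]]          # elitism
    while len(new_pop) < pop_size:
        # Tournament selection of two parents.
        a, b = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
        p1 = pop[a[np.argmin(fit[a])]]
        p2 = pop[b[np.argmin(fit[b])]]
        # Blend crossover plus Gaussian mutation, clipped to the bounds.
        child = 0.5 * (p1 + p2) + rng.normal(0, mut_sigma, lower.size) * (upper - lower)
        new_pop.append(np.clip(child, lower, upper))
    pop = np.array(new_pop)

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("best design:", best, "mass [kg]:", round(mass(best), 2))
```

The elitism step simply copies the best individuals unchanged into the next generation, mirroring the strategy described in the abstract.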
Abstract:
Changes in the architecture of polymers away from a linear chain influence their physico-chemical behaviour. Star-shaped polymers represent one possible architecture of branched molecules: linear polymer chains are covalently bound to the end points of a central core molecule, for example a dendrimer. Two problems were addressed in this work. First, the behaviour of polybutadiene star polymers in a matrix of linear polybutadiene was investigated by small-angle neutron scattering. The molecular weights of the linear chains were chosen such that one is smaller and the other larger than the lightest and the heaviest arm, respectively, of the star polymers used. In addition to the parameters arm number and arm weight, the concentration and temperature dependence was studied. The parameters extracted from these measurements were compared with the theoretical predictions for the scaling behaviour of star polymers in such blends. Furthermore, an interaction parameter was determined and decomposed into contributions from different types of interactions. The second question concerned the adsorption behaviour of star polymers in comparison with linear polymers. The kinetics of adsorption was studied by ellipsometry, and the structure formation by atomic force microscopy and grazing-incidence scattering.
Abstract:
This thesis deals with an investigation of Decomposition and Reformulation for solving Integer Linear Programming problems. This approach is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is to develop a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of automatically decomposing and reformulating the input problem, can be applied as a black-box solution algorithm, and works as a complement and an alternative to standard solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP several decompositions can be defined, depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation), otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (branch and price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a substantial speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
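To make the master/pricing loop concrete, the sketch below runs Dantzig-Wolfe-style column generation on a small cutting-stock instance; the restricted master duals are obtained by solving the master's LP dual directly with SciPy's linprog, and the pricing slave is an unbounded knapsack solved by dynamic programming. The instance data and tolerances are illustrative assumptions and are unrelated to the generic MIP framework of the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Toy cutting-stock data: roll width W, item widths w, demands d.
W = 100
w = np.array([45, 36, 31, 14])
d = np.array([97, 610, 395, 211], dtype=float)

def solve_master_dual(columns):
    """Duals pi of the master  min 1'x  s.t.  A x >= d, x >= 0,
       obtained from its LP dual  max d'pi  s.t.  A' pi <= 1, pi >= 0
       (strong duality gives the same objective value)."""
    A = np.array(columns).T                      # rows = items, cols = patterns
    res = linprog(c=-d, A_ub=A.T, b_ub=np.ones(A.shape[1]),
                  bounds=[(0, None)] * len(d), method="highs")
    return res.x, -res.fun                       # duals, master objective value

def price(pi):
    """Pricing slave: unbounded knapsack maximising sum pi_i a_i with sum w_i a_i <= W."""
    best_val = np.zeros(W + 1)
    best_pat = [np.zeros(len(w), dtype=int) for _ in range(W + 1)]
    for cap in range(1, W + 1):
        for i, wi in enumerate(w):
            if wi <= cap and best_val[cap - wi] + pi[i] > best_val[cap]:
                best_val[cap] = best_val[cap - wi] + pi[i]
                best_pat[cap] = best_pat[cap - wi].copy()
                best_pat[cap][i] += 1
    return best_pat[W], best_val[W]

# Start with one trivial column per item (a pattern cutting only that item).
columns = [np.eye(len(w), dtype=int)[i] * (W // w[i]) for i in range(len(w))]

while True:
    pi, obj = solve_master_dual(columns)
    pattern, value = price(pi)
    if value <= 1.0 + 1e-9:                      # reduced cost 1 - value >= 0: optimal
        break
    columns.append(pattern)                      # add the improving column and iterate

print(f"LP lower bound on rolls needed: {obj:.2f} using {len(columns)} patterns")
```

Each improving pattern returned by the pricing step corresponds to a new extreme point (column) of the slave polyhedron; the loop stops when no column with negative reduced cost remains.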
Abstract:
The assessment of safety in existing bridges and viaducts led the Ministry of Public Works of the Netherlands to finance a specific campaign aimed at studying the response of the elements of these infrastructures. This activity therefore focuses on investigating the behaviour of reinforced concrete slabs under concentrated loads, adopting finite element modeling and comparison with experimental results. These elements are characterized by shear behaviour and shear failure, whose modeling is a hard computational challenge due to the brittle behaviour combined with three-dimensional effects. The numerical modeling of the failure is studied through Sequentially Linear Analysis (SLA), an alternative finite element method with respect to traditional incremental and iterative approaches. The comparison between the two numerical techniques represents one of the first such studies in a three-dimensional setting, and it is carried out on one of the experimental tests executed on the reinforced concrete slabs. The advantage of SLA is that it avoids the well-known convergence problems of typical non-linear analyses by directly specifying a damage increment, in terms of a reduction of stiffness and strength in a particular finite element, instead of a load or displacement increment on the whole structure. For the first time, particular attention has been paid to specific aspects of the slabs, such as accurate modeling of the constraints and the sensitivity of the solution to mesh density. This detailed analysis of the main parameters showed a strong influence of the tensile fracture energy, the mesh density and the chosen model on the solution, in terms of the force-displacement diagram, the distribution of the crack patterns and the shear failure mode. SLA showed great potential, but it requires further development in two modeling aspects, namely load conditions (constant and proportional loads) and the softening behaviour of brittle materials (such as concrete) in three dimensions, in order to widen its horizons in these new contexts of study.
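The saw-tooth idea behind SLA can be sketched on a deliberately tiny toy problem: a set of parallel linear-elastic bars sharing one force. At every event a linear analysis under a unit load is performed, the critical bar and load multiplier are found, one point of the force-displacement curve is recorded, and the critical bar's stiffness and strength are reduced. Everything below (stiffnesses, strengths, the saw-tooth reduction factor) is an invented illustration, not the three-dimensional slab models analysed in the thesis.

```python
import numpy as np

# Toy structure: n bars in parallel sharing one force P (a scalar "finite element model").
k = np.array([200.0, 150.0, 120.0, 90.0])      # bar stiffnesses [N/mm]
ft = np.array([10.0, 9.0, 8.0, 7.0])           # current bar strengths [N]
teeth_left = np.full(k.size, 4)                # saw-teeth available per bar
reduction = 0.4                                # fraction of stiffness/strength kept per tooth

curve = [(0.0, 0.0)]                           # (displacement, force) pairs

while teeth_left.sum() > 0 and k.sum() > 1e-9:
    # 1) Linear analysis under a unit load P = 1.
    K = k.sum()
    u_unit = 1.0 / K                           # displacement for P = 1
    f_unit = k * u_unit                        # bar forces for P = 1

    # 2) Critical load multiplier: smallest scaling at which some bar reaches its strength.
    active = f_unit > 0
    lam = np.min(ft[active] / f_unit[active])
    crit = np.argmin(np.where(active, ft / np.maximum(f_unit, 1e-30), np.inf))
    curve.append((lam * u_unit, lam))          # record one point of the capacity curve

    # 3) Saw-tooth update of the critical bar: reduce stiffness and strength,
    #    removing the bar once its teeth are used up.
    if teeth_left[crit] > 1:
        k[crit] *= reduction
        ft[crit] *= reduction
        teeth_left[crit] -= 1
    else:
        k[crit] = 0.0
        ft[crit] = 0.0
        teeth_left[crit] = 0

for u, P in curve:
    print(f"u = {u:.4f} mm   P = {P:.2f} N")
```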
Abstract:
Membranes play an essential role in many important cellular processes. They enable the creation of chemical gradients between the cell interior and its surroundings. The cell membrane performs essential tasks in intra- and extracellular signal transduction and in adhesion to surfaces. Through processes such as endocytosis and exocytosis, substances are transported into or out of the cell, enclosed in vesicles formed from the cell membrane. In addition, the membrane also protects the cell interior. The main component of a cell membrane is the lipid bilayer, a two-dimensional fluid matrix with a heterogeneous composition of different lipids. Further building blocks, such as proteins, are embedded in this matrix. On the inner side of the cell, the membrane is coupled to the cytoskeleton via anchor proteins. This polymer network increases, among other things, the stability, influences the shape of the cell and performs functions in cell motility. Cell membranes are not homogeneous structures; depending on their function, different lipids and proteins are enriched in microscopic domains. To understand the fundamental mechanical properties of the cell membrane, the model system of pore-spanning membranes was used in this work. The development of pore-spanning membranes makes it possible to investigate the mechanical properties of membranes on the micro- to nanoscopic scale with atomic force microscopy methods. The porosity and pore size of the substrate determine the spatial resolution with which the mechanical parameters can be probed. Pore-spanning lipid bilayers and cell membranes on novel porous silicon substrates with pore radii from 225 nm to 600 nm and porosities of up to 30% were investigated. A route towards a comprehensive theoretical modelling of the local indentation experiments and the determination of the dominant energetic contributions in the mechanics of pore-spanning membranes is outlined. Pore-spanning membranes show a linearly increasing force with increasing indentation depth. By examining different surfaces, pore sizes and membranes of different composition, it was possible for free-standing lipid bilayers to determine the influence of the surface properties and the geometry of the substrate, as well as of the membrane phase and the solvent, on the mechanical properties. The experimental data can be described with a theoretical model, yielding parameters such as the lateral tension and the bending modulus of the membrane. Depending on the substrate properties, lateral tensions from 150 μN/m up to 31 mN/m were found for free-standing lipid bilayers, with bending moduli between 10^(−19) J and 10^(−18) J. Force-indentation experiments on pore-spanning cell membranes allowed a comparison between the model of free-standing lipid bilayers and native membranes. The lateral tensions of native free-standing membranes were determined to be 50 μN/m. Furthermore, the influence of the cytoskeleton and the extracellular matrix on the mechanical properties could be determined and mapped within a basolateral cell membrane fragment, with the periodicity and the pore diameter of the substrate determining the spatial resolution.
By fixation of the free-standing cell membrane, the bending modulus of the membrane was increased by up to a factor of 10. This work shows how locally resolved mechanical properties can be measured and quantified using the model system of pore-spanning membranes. Furthermore, the dominant energetic contributions are discussed and a comparison with natural membranes is established.
Abstract:
In this work polymer brushes on both flat and curved substrates were prepared by grafting-from and grafting-to techniques. The brushes on flat substrates were patterned on the µm scale with the use of an inkjet printer, demonstrating that chemistry with an inkjet printer is feasible. The inkjet printer was used to deposit microdroplets of acid. The saponification of surface-immobilized ATRP initiators containing an ester bond occurred in these microdroplets. The changes in the monolayer of ester molecules due to saponification were amplified by SI-ATRP. It was possible to correlate the polymer brush thickness to the effectiveness of saponification. The use of an inkjet printer allowed for simultaneous screening of parameters such as the type of acid, the concentration of acid, and the contact time between acid and surface. A dip-coater was utilized in order to test the saponification independently of droplet evaporation. The advantage of the developed process is its versatility: it can be applied to all surface-immobilized initiators containing ester bonds. The technique has additionally been used to selectively defunctionalize the initiator molecules covering one side of a microcantilever. An asymmetric coating of the cantilever with polymer brushes was thus generated, which allows the use of a microcantilever for sensing applications. The preparation of nanocomposites comprised of polyorganosiloxane microgel particles functionalized with poly(ethyl methacrylate) (PEMA) brushes and linear, but entangled, PEMA chains is described in the second major part of this thesis. The inter-particle distance was measured using scanning probe microscopy and grazing-incidence small-angle X-ray scattering. The matrix molecular weight at which the nanocomposite showed microphase separation was related to abrupt changes in the inter-particle distance. Microphase separation occurred once the matrix molecular weight exceeded the molecular weight of the brushes. The trigger for the microphase separation was a contraction of the polymer brushes, as the measurements of inter-particle distance revealed. Upon contraction the brushes became impenetrable for the matrix chains and thus behaved as hard spheres. The contraction led to a loss of anchoring between particles and matrix, as shown by nanowear tests using an atomic force microscope. Polyorganosiloxane microgel particles were also functionalized with 13C-enriched poly(ethyl methacrylate) brushes. New synthetic pathways were developed in order to enrich not the entire brush with 13C but only selected regions. 13C chemical shift anisotropy, an advanced NMR technique, can thus be used to gather information about the extended conformations in the 13C-enriched regions of the PEMA chains immobilized on the µ-gel-g-PEMA particles. The third part of this thesis deals with the grafting-to of polymeric fullerene materials on silicon substrates. Active ester chemistry was employed in order to prepare the polymeric fullerene materials and graft them covalently onto amino-functionalized silicon substrates.
Abstract:
The Thermodynamic Bethe Ansatz analysis is carried out for the extended-CP^N class of integrable 2-dimensional Non-Linear Sigma Models related to the low energy limit of the AdS_4xCP^3 type IIA superstring theory. The principal aim of this program is to obtain further non-perturbative consistency checks of the S-matrix proposed to describe the scattering processes between the fundamental excitations of the theory, by analyzing the structure of the Renormalization Group flow. As a noteworthy byproduct we eventually obtain a novel class of TBA models which fits in the known classification but with several important differences. The TBA framework allows the evaluation of some exact quantities related to the conformal UV limit of the model: the effective central charge, the conformal dimension of the perturbing operator and the field content of the underlying CFT. The knowledge of these physical quantities makes it possible to conjecture a perturbed CFT realization of the integrable models in terms of coset Kac-Moody CFTs. The set of numerical tools and programs developed ad hoc to solve the problem at hand is also discussed in some detail, with references to the code.
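As a minimal illustration of the numerical machinery such an analysis relies on, the sketch below iterates a single-node TBA equation on a rapidity grid and evaluates the effective central charge; the interaction kernel is set to zero (the free-fermion case, for which c_eff approaches 1/2 in the UV limit), so the particle content, kernels and couplings of the extended-CP^N models are deliberately not reproduced.

```python
import numpy as np

# Rapidity grid and scale parameter r = m R (deep UV for small r).
theta = np.linspace(-25, 25, 2001)
dtheta = theta[1] - theta[0]
r = 1e-3

def kernel(t):
    # Interaction kernel phi(theta); zero reproduces a free Majorana fermion.
    return np.zeros_like(t)

# Pseudo-energy iteration: eps = r cosh(theta) - (phi * L)/(2 pi), with L = log(1 + e^{-eps}).
nu = r * np.cosh(theta)
eps = nu.copy()
K = kernel(theta[:, None] - theta[None, :])          # convolution matrix on the grid
for _ in range(200):
    L = np.log1p(np.exp(-eps))
    eps_new = nu - (K @ L) * dtheta / (2 * np.pi)
    if np.max(np.abs(eps_new - eps)) < 1e-12:
        eps = eps_new
        break
    eps = eps_new

# Effective central charge c_eff(r) = (3/pi^2) * int dtheta r cosh(theta) L(theta).
L = np.log1p(np.exp(-eps))
c_eff = 3 / np.pi**2 * np.sum(nu * L) * dtheta
print(f"c_eff(r={r}) = {c_eff:.4f}   (free fermion UV value: 0.5)")
```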
Abstract:
The idea of balancing the resources spent in the acquisition and encoding of natural signals strictly to their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon the foundations of this technique, by modifying the random sensing matrices onto which the signals of interest are projected in order to achieve different objectives. Firstly, we propose two methods for the adaptation of sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied in the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant, form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing to the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results, which leave room for improvement in the calibration of the sensing matrix of the devised imager.
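A minimal sketch of the underlying pipeline is given below: a sparse signal is acquired through a random Gaussian sensing matrix and recovered with a hand-rolled orthogonal matching pursuit, and, as a stand-in for second-order adaptation, the rows of the sensing matrix may instead be drawn from a non-white covariance. The signal model, the sizes and the covariance-blending rule are illustrative assumptions, not the information-maximising designs, the ECG encoder or the encryption and imaging schemes of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                     # signal length, measurements, sparsity

# Sparse test signal in the canonical basis.
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(0, 1, k)

# Assumed second-order statistics of the signal class (mild low-pass correlation).
idx = np.arange(n)
Cx = 0.9 ** np.abs(idx[:, None] - idx[None, :])

def sensing_matrix(m, n, C=None):
    """Rows ~ N(0, I) for plain CS, or N(0, C_mix) to bias rows towards the signal statistics."""
    if C is None:
        return rng.normal(0, 1, (m, n)) / np.sqrt(m)
    C_mix = 0.5 * C / np.trace(C) * n + 0.5 * np.eye(n)   # blend of signal covariance and white noise
    Lc = np.linalg.cholesky(C_mix + 1e-10 * np.eye(n))
    return (rng.normal(0, 1, (m, n)) @ Lc.T) / np.sqrt(m)

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, re-fit by least squares."""
    residual, picked = y.copy(), []
    for _ in range(k):
        picked.append(int(np.argmax(np.abs(Phi.T @ residual))))
        sol, *_ = np.linalg.lstsq(Phi[:, picked], y, rcond=None)
        residual = y - Phi[:, picked] @ sol
    x_hat = np.zeros(Phi.shape[1])
    x_hat[picked] = sol
    return x_hat

for name, C in [("i.i.d. Gaussian", None), ("second-order adapted", Cx)]:
    Phi = sensing_matrix(m, n, C)
    y = Phi @ x                           # compressive acquisition
    x_hat = omp(Phi, y, k)
    err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
    print(f"{name:>22}: relative recovery error = {err:.2e}")
```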
Abstract:
In the food industry, quality assurance requires low-cost methods for the rapid assessment of the parameters that affect product stability. Foodstuffs are complex in their structure, mainly composed of gaseous, liquid and solid phases which often coexist in the same product. Special attention is given to water, whether as a natural component of the main food product or as an ingredient added during a production process. In particular, water is structurally present in the matrix and not completely available. Water can thus be present in foodstuffs in many different states: as water of crystallization, bound to protein or starch molecules, entrapped in biopolymer networks, or adsorbed on the solid surfaces of porous food particles. The traditional techniques for the assessment of food quality give reliable information but are destructive, time consuming and unsuitable for on-line application. The techniques proposed here answer the need for rapid measurements and are able to characterize the main compositional parameters. The dielectric response is mainly related to water and can provide information not only on the total water content but also on the degree of mobility of this ubiquitous molecule in different complex food matrices. The aim of this thesis is to address this need: dielectric and electrical tools are used to describe the complex food matrix and to predict food characteristics. The thesis is structured in three main parts. In the first, some theoretical tools are recalled in order to define the food parameters involved in quality and the techniques able to address the problems identified. The second part explains the research conducted, and the experimental plans are illustrated in detail. The last section is devoted to rapid methods easily implementable in an industrial process.
Abstract:
This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism, based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that, because of localization, the Hamiltonian and the density matrix are sparse. This reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane subjected to extreme pressure (about 100 GPa) and extreme temperature (2000 - 8000 K). In the simulation, methane dissociates at temperatures above 4000 K. The formation of sp²-bonded polymeric carbon is observed. The simulations give no indication of diamond formation and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also addressed. This results in a new formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing roots of functions. It is proved that the order of convergence is always at least quadratic and that adaptive adjustment of a parameter q leads to better results in all cases.
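For context, the sketch below shows the classical coupled Newton-Schulz iteration, which computes the inverse square root of a symmetric positive definite matrix using only matrix multiplications, i.e. without diagonalization or explicit inversion; the thesis generalizes this type of fixed-point scheme to arbitrary (inverse) p-th roots with an adaptively chosen parameter q, which is not reproduced here.

```python
import numpy as np

def inv_sqrt_newton_schulz(A, iters=30):
    """Inverse square root of an SPD matrix via the coupled Newton-Schulz iteration.
       Converges when the spectrum of the scaled matrix lies in (0, 2), hence the pre-scaling."""
    n = A.shape[0]
    s = np.linalg.norm(A, 2)              # spectral norm; scaling puts eigenvalues in (0, 1]
    Y = A / s
    Z = np.eye(n)
    I = np.eye(n)
    for _ in range(iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T                          # Y -> (A/s)^{1/2}
        Z = T @ Z                          # Z -> (A/s)^{-1/2}
    return Z / np.sqrt(s)                  # undo the scaling to obtain A^{-1/2}

rng = np.random.default_rng(2)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)             # well-conditioned SPD test matrix

X = inv_sqrt_newton_schulz(A)
print("|| X A X - I ||_F =", np.linalg.norm(X @ A @ X - np.eye(50)))
```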
Abstract:
The objective of this thesis is to investigate which contexts call for different kinds of note-taking and to study the evolution of the various types of note-taking. Moreover, the final aim of this thesis is to understand which method is used most commonly during the interpreting process, with a special focus on consecutive and community interpreting in the public service and healthcare sectors. The belief behind this thesis is that the most complete method is Rozan's, which is also the most theorized and the most used by interpreters. Through the analysis of the different rules of this practice, the importance of this method is shown, and the analysis demonstrates how these techniques can assist interpreters in their work. The thesis starts with an overview of what note-taking means in the different settings of interpreting, together with a short history of note-taking. The section that follows analyzes three well-known types of note-taking methods outside the interpreting environment, namely linear, non-linear and shorthand. Following this comparison, Rozan's 7 principles are analyzed. To support this thesis and the hypotheses herein, data were collected through a survey conducted on a sample of graduates in Linguistic and Intercultural Mediation at the University of Bologna "Scuola Superiore di Lingue Moderne per Interpreti e Traduttori".
Abstract:
Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in the computation of the matrix function itself, but just in that of its product times a vector, the problem becomes simpler and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and also try to assess how convergence speed and execution time are influenced by some characteristics of the input matrices. Our results suggest that a few factors have some bearing on the performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
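As a small dense cross-check of the quantity in question, the sketch below evaluates the weighted geometric mean applied to a vector through the identity A #_t B = A (A^{-1}B)^t, using SciPy's dense fractional matrix power; this only serves as a reference for small matrices and is not the quadrature or Krylov machinery for the large sparse pencil case f(A\B) v developed in the thesis.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(3)

def spd(n):
    M = rng.normal(size=(n, n))
    return M @ M.T + n * np.eye(n)

n, t = 60, 0.5                       # weight t in (0, 1); t = 0.5 gives the plain geometric mean
A, B = spd(n), spd(n)
v = rng.normal(size=n)

# Weighted geometric mean times a vector: (A #_t B) v = A (A^{-1} B)^t v.
M = np.linalg.solve(A, B)            # A \ B (non-symmetric in general, but with real positive spectrum)
z = A @ (fractional_matrix_power(M, t) @ v)

# Cross-check against the symmetric form A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2} v.
Ah  = fractional_matrix_power(A, 0.5)
Aih = fractional_matrix_power(A, -0.5)
z_ref = Ah @ (fractional_matrix_power(Aih @ B @ Aih, t) @ (Ah @ v))

print("relative difference between the two forms:",
      np.linalg.norm(z - z_ref) / np.linalg.norm(z_ref))
```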
Abstract:
In cartilage repair, bioregenerative approaches using tissue engineering techniques have tried to achieve a close resemblance to hyaline cartilage, which might be visualized using advanced magnetic resonance imaging.
Abstract:
The self-regeneration capacity of articular cartilage is limited, due to its avascular and aneural nature. Loaded explants and cell cultures demonstrated that chondrocyte metabolism can be regulated via physiologic loading. However, the explicit ranges of mechanical stimuli that correspond to a favourable metabolic response associated with extracellular matrix (ECM) synthesis are elusive. Unsystematic protocols lacking this knowledge produce inconsistent results. This study aims to determine the intrinsic ranges of physical stimuli that increase ECM synthesis and simultaneously inhibit nitric oxide (NO) production in chondrocyte-agarose constructs, by numerically re-evaluating the experiments performed by Tsuang et al. (2008). Twelve loading patterns were simulated with poro-elastic finite element models in ABAQUS. Pressure on the solid matrix, von Mises stress, maximum principal stress and pore pressure were selected as intrinsic mechanical stimuli. Their development rates and magnitudes at the steady state of cyclic loading were calculated with MATLAB at the construct level. A concurrent increase in glycosaminoglycan and collagen was observed at 2300 Pa pressure and 40 Pa/s pressure rate. Between 0-1500 Pa and 0-40 Pa/s, NO production was consistently positive with respect to controls, whereas ECM synthesis was negative in the same range. A linear correlation was found between pressure rate and NO production (R = 0.77). The stress states identified in this study are generic and could be used to develop predictive algorithms for matrix production in agarose-chondrocyte constructs of arbitrary shape, size and agarose concentration. They could also help increase the efficacy of loading protocols for avascular tissue engineering. Copyright (c) 2010 John Wiley & Sons, Ltd.
Abstract:
Conservation strategies for long-lived vertebrates require accurate estimates of parameters such as population size, the number of non-breeding individuals (the “cryptic” fraction of the population) and the age structure. Frequently, visual survey techniques are used to make these estimates, but the accuracy of these approaches is questionable, mainly because of numerous potential biases. Here we compare data on population trends and age structure in a bearded vulture (Gypaetus barbatus) population, obtained from visual surveys performed at supplementary feeding stations, with data derived from population matrix-modelling approximations. Our results suggest that visual surveys overestimate the number of immature (<2 years old) birds, whereas subadults (3–5 y.o.) and adults (>6 y.o.) are underestimated in comparison with the predictions of a population model using a stable-age distribution. In addition, we found that visual surveys did not provide conclusive information on true variations in the size of the focal population. Our results suggest that although long-term studies (i.e. population matrix modelling based on capture-recapture procedures) are a more time-consuming method, they provide more reliable and robust estimates of the population parameters needed in designing and applying conservation strategies. The findings shown here are likely transferable to the management and conservation of other long-lived vertebrate populations that share similar life-history traits and ecological requirements.
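To illustrate the matrix-modelling side of the comparison, the short sketch below builds a Leslie-type projection matrix from hypothetical survival and fecundity values (not the bearded vulture estimates used in the study) and extracts the asymptotic growth rate and the stable age distribution from its dominant eigenpair, i.e. the kind of expected age structure against which survey counts can be compared.

```python
import numpy as np

# Hypothetical parameters for a long-lived bird (illustrative only, not the study's estimates):
# survival[i] is the probability of surviving from age class i to i+1,
# fecundity[i] is the number of female offspring produced by a female in age class i.
survival  = np.array([0.70, 0.80, 0.85, 0.90, 0.92])
fecundity = np.array([0.00, 0.00, 0.05, 0.20, 0.35, 0.40])

n = fecundity.size
L = np.zeros((n, n))
L[0, :] = fecundity                                  # top row: reproduction into age class 0
L[np.arange(1, n), np.arange(n - 1)] = survival      # sub-diagonal: ageing/survival
L[n - 1, n - 1] = 0.92                               # individuals in the last class stay there

# Dominant eigenpair: lambda is the asymptotic growth rate,
# the associated eigenvector (normalised) is the stable age distribution.
eigvals, eigvecs = np.linalg.eig(L)
i = np.argmax(eigvals.real)
lam = eigvals[i].real
stable_age = np.abs(eigvecs[:, i].real)
stable_age /= stable_age.sum()

print(f"asymptotic growth rate lambda = {lam:.3f}")
for age, frac in enumerate(stable_age):
    print(f"  age class {age}: {frac:.1%} of the population")
```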