11 results for "Philosophy of the art"

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

Since the birth of the European Union in 1957, the development of a single market through the integration of national freight transport networks has been one of the most important points on the European Union agenda. Increasingly congested motorways, rising oil prices and concerns about the environment and climate change require the optimization of transport systems and transport processes. The most promising solution is intermodal transport, in which the most efficient transport option is used for each leg of the journey. This thesis examines the problem of defining innovative strategies and procedures for the sustainable development of intermodal freight transport in Europe. In particular, the roles of maritime transport and railway transport in the intermodal chain are examined in depth, as these modes are recognized to be environmentally friendly and energy efficient. Maritime transport is the only mode that has kept pace with the fast growth in road transport, but its full exploitation needs to be promoted by involving short sea shipping as an integrated service in the intermodal door-to-door supply chain and by improving port accessibility. The role of Motorways of the Sea services as part of the Trans-European Transport Network is taken into account: an overview of European policy and of the state of the art of the Italian Motorways of the Sea system is reported. Afterwards, the focus shifts from line problems to node problems: the role of intermodal railway terminals in the transport chain is discussed. In particular, the last-mile process is examined, as it is crucial in order to exploit the full capacity of an intermodal terminal. The differences between the current last-mile planning models of Bologna Interporto and Verona Quadrante Europa are described and discussed. Finally, a new approach to railway intermodal terminal planning and management is introduced by describing the case of "Terminal Gate" at Verona Quadrante Europa. Some proposals to favour the integrated management of "Terminal Gate" and the allocation of its capacity are drawn up.
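The abstract does not describe how capacity at "Terminal Gate" is actually allocated; purely as a hypothetical illustration of the kind of last-mile capacity-allocation problem discussed above, the following Python sketch greedily assigns arriving trains to the handling track that frees up earliest. All train names, times and track counts are invented for the example.

# Hypothetical sketch: greedy assignment of arriving trains to terminal handling tracks.
# Train names, times and the number of tracks are invented; the thesis's actual
# "Terminal Gate" allocation rules are not reported in the abstract.
from dataclasses import dataclass

@dataclass
class Train:
    name: str
    arrival: float        # hours after midnight
    handling_time: float  # hours required on a handling track

def allocate(trains, n_tracks):
    """Assign each train, in order of arrival, to the track that frees up earliest."""
    track_free_at = [0.0] * n_tracks
    plan = []
    for t in sorted(trains, key=lambda tr: tr.arrival):
        track = min(range(n_tracks), key=lambda i: track_free_at[i])
        start = max(t.arrival, track_free_at[track])
        track_free_at[track] = start + t.handling_time
        plan.append((t.name, track, start, start + t.handling_time))
    return plan

if __name__ == "__main__":
    trains = [Train("TEC-101", 6.0, 2.5), Train("TEC-102", 6.5, 3.0),
              Train("TEC-103", 7.0, 2.0), Train("TEC-104", 8.0, 2.5)]
    for name, track, start, end in allocate(trains, n_tracks=2):
        print(f"{name}: track {track}, {start:04.1f}h -> {end:04.1f}h")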

Relevance:

100.00%

Publisher:

Abstract:

The treatment of Cerebral Palsy (CP) is considered the "core problem" of the whole field of pediatric rehabilitation. The reason why this pathology plays such a central role can be ascribed to two main aspects. First, CP is the most frequent form of disability in childhood (one new case per 500 live births (1)); second, the functional recovery of the "spastic" child is, historically, the clinical field in which most of the therapeutic methods and techniques (physiotherapy, orthotics, pharmacology, orthopedic surgery, neurosurgery) were first applied and tested. The currently accepted definition of CP – a group of disorders of the development of movement and posture causing activity limitation (2) – is the result of a recent update by the World Health Organization to the language of the International Classification of Functioning, Disability and Health, starting from the original proposal by Ingram – a persistent but not unchangeable disorder of posture and movement – dated 1955 (3). This definition considers CP a permanent ailment, i.e. a "fixed" condition, which can nevertheless be modified both functionally and structurally by the child's spontaneous development and by the treatments carried out during childhood. The lesion that causes the palsy occurs in a structurally immature brain during the pre-, peri- or post-natal period (but only within the first months of life). The most frequent causes of CP are: prematurity, insufficient cerebral perfusion, arterial haemorrhage, venous infarction, hypoxia of various origins (for example, from the aspiration of amniotic fluid), malnutrition, infection and maternal or fetal poisoning. Traumas and malformations must also be included among these causes. The lesion, whether focal or spread over the nervous system, impairs the functioning of the Central Nervous System (CNS) as a whole. As a consequence, it affects the construction of the adaptive functions (4), first of all postural control, locomotion and manipulation. The palsy itself does not vary over time; however, it unavoidably assumes an "evolutionary" character as, during growth, the child is required to meet new and different needs through the construction of new and different functions. It is essential to consider that, clinically, CP is not only a direct expression of the structural impairment, that is of etiology, pathogenesis and lesion timing, but is mainly the manifestation of the path followed by the CNS to "re"-construct the adaptive functions "despite" the presence of the damage. "Palsy" is "the form of the function that is implemented by an individual whose CNS has been damaged in order to satisfy the demands coming from the environment" (4). Therefore, only general relations can be established between lesion site, nature and size on the one hand, and palsy and recovery processes on the other. It is quite common to observe that children with very similar neuroimaging can have very different clinical manifestations of CP and, conversely, that children with very similar motor behaviours can have completely different lesion histories. A very clear example of this is represented by the hemiplegic forms, which show bilateral hemispheric lesions in a high percentage of cases. The first section of this thesis is aimed at guiding the interpretation of CP. First, the issue of the detection of the palsy is treated from a historical viewpoint. Then, an extended analysis of the current, internationally accepted definition of CP is provided.
The definition is then examined in terms of a space dimension and then of a time dimension, highlighting where it is unacceptably lacking. The last part of the first section further stresses the importance of shifting from the traditional concept of CP as a palsy of development (defect analysis) towards the notion of a development of the palsy, i.e. the product of the relationship that the individual nonetheless tries to dynamically build with the surrounding environment (resource semeiotics), starting and growing from a different availability of resources, needs, dreams, rights and duties (4). In the scientific and clinical community, no classification system of CP has so far been universally accepted. Moreover, no standard operative methods or techniques have been acknowledged to effectively assess the different disabilities and impairments exhibited by children with CP. CP is still "an artificial concept, comprising several causes and clinical syndromes that have been grouped together for a convenience of management" (5). The lack of standard, common protocols able to effectively diagnose the palsy, and as a consequence to establish specific treatments and prognoses, is mainly due to the difficulty of bringing this field to a level based on scientific evidence. A solution aimed at overcoming the currently incomplete treatment of CP children is the systematic clinical adoption of objective tools able to measure motor defects and movement impairments. A widespread application of reliable instruments and techniques able to objectively evaluate both the form of the palsy (diagnosis) and the efficacy of the treatments provided (prognosis) constitutes a valuable means of validating care protocols, establishing the efficacy of classification systems and assessing the validity of definitions. Since the 1980s, instruments specifically oriented towards the analysis of human movement have been designed and profitably applied in the context of CP with the aim of measuring motor deficits and, especially, gait deviations. The gait analysis (GA) technique has been increasingly used over the years to assess, analyze, classify, and support the process of clinical decision making, allowing for a complete investigation of gait with increased temporal and spatial resolution. GA has provided a basis for improving the outcome of surgical and non-surgical treatments and for introducing a new modus operandi in the identification of defects and functional adaptations to musculoskeletal disorders. Historically, the first laboratories set up for gait analysis developed their own protocols (sets of procedures for data collection and data reduction) independently, according to the performance of the technologies available at that time. In particular, stereophotogrammetric systems, mainly based on optoelectronic technology, soon became the gold standard for motion analysis and have been successfully applied, especially for scientific purposes. Nowadays optoelectronic systems have significantly improved their performance in terms of spatial and temporal resolution; however, many laboratories continue to use protocols designed around the technology available in the 1970s and now out of date. Furthermore, these protocols are not consistent either in their biomechanical models or in their data collection procedures.
In spite of these differences, GA data are shared, exchanged and interpreted irrespective of the adopted protocol, without full awareness of the extent to which these protocols are compatible and comparable with each other. Following the extraordinary advances in computer science and electronics, new systems for GA, no longer based on optoelectronic technology, are now becoming available. These are Inertial and Magnetic Measurement Systems (IMMSs), based on miniature MEMS (micro-electro-mechanical systems) inertial sensor technology. They are cost-effective, wearable and fully portable motion analysis systems; these features give IMMSs the potential to be used outside specialized laboratories and to collect consecutive series of tens of gait cycles. The recognition and selection of the most representative gait cycle then becomes easier and more reliable, especially in CP children, given their marked gait cycle variability. The second section of this thesis is focused on GA. In particular, it first examines the differences among the five most representative GA protocols in order to assess the state of the art with respect to inter-protocol variability. The design of a new protocol is then proposed and presented, with the aim of performing gait analysis on CP children by means of an IMMS. The protocol, named 'Outwalk', contains original and innovative solutions oriented towards obtaining joint kinematics with calibration procedures that are extremely comfortable for the patients. The results of a first in-vivo validation of Outwalk on healthy subjects are then provided. In particular, this study was carried out by comparing Outwalk, used in combination with an IMMS, against a reference protocol and an optoelectronic system. In order to set up a more accurate and precise comparison of the systems and the protocols, ad hoc methods were designed and an original formulation of the coefficient of multiple correlation, a statistical parameter, was developed and effectively applied (a classical form of this index is recalled, for context, after this abstract). On the basis of the experimental design proposed for the validation on healthy subjects, a first assessment of Outwalk, together with an IMMS, was also carried out on CP children. The third section of this thesis is dedicated to the treatment of walking in CP children. Commonly prescribed treatments addressing gait abnormalities in CP children include physical therapy, surgery (orthopedic and rhizotomy), and orthoses. The orthotic approach is conservative, being reversible, and is widespread in many therapeutic regimes. Orthoses are used to improve the gait of children with CP by preventing deformities, controlling joint position, and offering an effective lever for the ankle joint. Orthoses are prescribed with the additional aims of increasing walking speed, improving stability, preventing stumbling, and decreasing muscular fatigue. Ankle-foot orthoses (AFOs) with a rigid ankle are primarily designed to prevent equinus and other foot deformities, with a positive effect also on more proximal joints. However, AFOs prevent the natural excursion of the tibio-tarsic joint during the second rocker, hence hampering the natural forward progression of the whole body under the effect of inertia (6). A new modular (submalleolar) astragalus-calcanear orthosis, named OMAC, has recently been proposed with the intention of replacing the prescription of AFOs in those CP children exhibiting a flat, valgus-pronated foot.
The aim of this section is thus to present the mechanical and technical features of the OMAC by means of an accurate description of the device. In particular, the full text of the deposited Italian patent is provided. A preliminary validation of OMAC with respect to the AFO is also reported, as resulted from an experimental campaign on diplegic CP children over a three-month period, aimed at quantitatively assessing the benefit provided by the two orthoses on walking and at qualitatively evaluating the changes in quality of life and motor abilities. As already stated, CP is universally considered a persistent but not unchangeable disorder of posture and movement. In contrast with this definition, some clinicians (4) have recently pointed out that movement disorders may be primarily caused by the presence of perceptive disorders, where perception is not merely the acquisition of sensory information, but an active process aimed at guiding the execution of movements through the integration of sensory information properly representing the state of one's body and of the environment. Children with perceptive impairments show an overall fear of moving and the onset of strongly unnatural walking schemes directly caused by the presence of perceptive system disorders. The fourth section of the thesis thus deals with accurately defining the perceptive impairment exhibited by diplegic CP children. A detailed description of the clinical signs revealing the presence of the perceptive impairment and a classification scheme of the clinical aspects of perceptual disorders are provided. Finally, a functional reaching test is proposed as an instrumental test able to disclose the perceptive impairment.
References
1. Prevalence and characteristics of children with cerebral palsy in Europe. Dev Med Child Neurol. 2002 Sep;44(9):633-640.
2. Bax M, Goldstein M, Rosenbaum P, Leviton A, Paneth N, Dan B, et al. Proposed definition and classification of cerebral palsy, April 2005. Dev Med Child Neurol. 2005 Aug;47(8):571-576.
3. Ingram TT. A study of cerebral palsy in the childhood population of Edinburgh. Arch Dis Child. 1955 Apr;30(150):85-98.
4. Ferrari A, Cioni G. The spastic forms of cerebral palsy: a guide to the assessment of adaptive functions. Milan: Springer; 2009.
5. Olney SJ, Wright MJ. Cerebral palsy. In: Campbell S, et al., editors. Physical Therapy for Children. 2nd ed. Philadelphia: Saunders; 2000. p. 533-570.
6. Desloovere K, Molenaers G, Van Gestel L, Huenaerts C, Van Campenhout A, Callewaert B, et al. How can push-off be preserved during use of an ankle foot orthosis in children with hemiplegia? A prospective controlled study. Gait Posture. 2006 Oct;24(2):142-151.
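The original formulation of the coefficient of multiple correlation (CMC) developed in the thesis is not reported in the abstract; for context only, a classical within-session form of the CMC for G gait waveforms Y_{gf} sampled over F frames can be written as:

\[
\mathrm{CMC}=\sqrt{\,1-\frac{\displaystyle\sum_{g=1}^{G}\sum_{f=1}^{F}\bigl(Y_{gf}-\bar{Y}_{f}\bigr)^{2}\,/\,\bigl(F(G-1)\bigr)}{\displaystyle\sum_{g=1}^{G}\sum_{f=1}^{F}\bigl(Y_{gf}-\bar{Y}\bigr)^{2}\,/\,(GF-1)}}\,,
\qquad
\bar{Y}_{f}=\frac{1}{G}\sum_{g=1}^{G}Y_{gf},
\quad
\bar{Y}=\frac{1}{GF}\sum_{g=1}^{G}\sum_{f=1}^{F}Y_{gf}.
\]

Values close to 1 indicate waveforms that are similar in shape and magnitude; the thesis's own reformulation, adapted to inter-protocol and inter-system comparisons, is not reproduced here.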

Relevance:

100.00%

Publisher:

Abstract:

Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from the branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and by proofs related to the metatheory of programming languages. This last field, an obvious application of interactive theorem proving, nonetheless poses a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed and to difficulties related to the management of notions typical of programming languages, such as variable binding. This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, Part I covers:
- the results of our effort in providing a better framework for the development of tactics for Matita, in order to make their implementation and debugging easier, also resulting in much clearer code;
- a discussion of the implementation of two tactics providing infrastructure for the unification of constructor forms and the inversion of inductive predicates; we point out interactions between induction and inversion and provide an advancement over the state of the art.
In the second part of the thesis, we focus on aspects related to the formalization of programming languages. We describe two works of ours:
- a discussion of the basic issues we encountered in our formalization of part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita;
- a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; for the encoding of binding, this work adopts an extension of Masahiko Sato's canonical locally named representation that we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.

Relevance:

100.00%

Publisher:

Abstract:

The in vivo determination of skeletal loading conditions and their relationship to the health of bone tissue remain open questions. Computational modeling of the musculoskeletal system is the only practicable method providing a valuable approach to muscle and joint loading analyses, although crucial shortcomings limit the translation of computational methods into orthopedic and neurological practice. Growing attention has focused on subject-specific modeling, particularly when pathological musculoskeletal conditions need to be studied. Nevertheless, subject-specific data cannot always be collected in research and clinical practice, and there is a lack of efficient methods and frameworks for building models and incorporating them in simulations of motion. The overall aim of the present PhD thesis was to introduce improvements to state-of-the-art musculoskeletal modeling for the prediction of physiological muscle and joint loads during motion. A threefold goal was articulated as follows: (i) develop state-of-the-art subject-specific models and analyze skeletal load predictions; (ii) analyze the sensitivity of model predictions to relevant musculotendon model parameters and kinematic uncertainties; (iii) design an efficient software framework simplifying the effort-intensive phases of subject-specific modeling pre-processing. The first goal underlined the relevance of subject-specific musculoskeletal modeling for determining physiological skeletal loads during gait, corroborating the choice of fully subject-specific modeling for the analysis of pathological conditions. The second goal characterized the sensitivity of skeletal load predictions to the major musculotendon parameters and to kinematic uncertainties, applying robust probabilistic methods for methodological and clinical purposes. The last goal produced an efficient software framework for subject-specific modeling and simulation that is practical, user-friendly and effort-effective. Future research will aim at implementing more accurate models of lower-limb joint mechanics and musculotendon paths, and at assessing, through probabilistic modeling, the overall set of crucial model parameters affecting skeletal load predictions.
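The probabilistic methods are not detailed in the abstract; as a minimal sketch of goal (ii), one can perturb musculotendon parameters with Monte Carlo sampling and inspect the spread of a force prediction. The toy Gaussian force-length model and all parameter values below are assumptions made for illustration, not the thesis's actual models.

# Minimal Monte Carlo sensitivity sketch (illustrative only, not the thesis's pipeline).
# A toy active force-length curve stands in for the real musculoskeletal model;
# nominal parameter values and their variability are invented.
import numpy as np

rng = np.random.default_rng(0)

def active_force(l_m, f_max, l_opt, width=0.45):
    """Toy Gaussian active force-length relation."""
    return f_max * np.exp(-((l_m - l_opt) / (width * l_opt)) ** 2)

n = 10_000
f_max = rng.normal(1000.0, 100.0, n)   # max isometric force [N], ~10% spread
l_opt = rng.normal(0.10, 0.005, n)     # optimal fiber length [m], ~5% spread
l_m = 0.095                            # current fiber length [m] (fixed input)

force = active_force(l_m, f_max, l_opt)
print(f"mean force  : {force.mean():8.1f} N")
print(f"5-95% range : {np.percentile(force, 5):8.1f} - {np.percentile(force, 95):8.1f} N")
# Crude sensitivity indicator: correlation of each perturbed input with the output.
for name, x in (("f_max", f_max), ("l_opt", l_opt)):
    print(f"corr(force, {name}) = {np.corrcoef(force, x)[0, 1]:+.2f}")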

Relevance:

100.00%

Publisher:

Abstract:

A possible future scenario for water injection (WI) has been explored as an advanced strategy for modern GDI engines. The aim is to verify whether the PWI (Port Water Injection) and DWI (Direct Water Injection) architectures can replace current fuel enrichment strategies used to limit turbine inlet temperatures (TiT) and the engine's knock tendency. In this way, it might be possible to extend the stoichiometric mixture condition over the entire engine map, meeting possible future restrictions on the use of AES (Auxiliary Emission Strategies) and future emission limits. The research first addressed a comprehensive assessment of the state of the art of the technology and of the main effects of the chemical-physical properties of water. Then, detailed chemical kinetics simulations were performed in order to compute the effects of WI on combustion development and auto-ignition; the latter represents an important methodological step towards accurate numerical combustion simulations. Water injection was then analysed in detail for a PWI system, through an experimental campaign for macroscopic and microscopic injector characterization inside a test chamber. The collected data were used to perform a numerical validation of the spray models, obtaining an excellent match in terms of droplet size and velocity distributions. Finally, a wide range of three-dimensional CFD simulations of a virtual high-BMEP engine were carried out and compared, also exploring different engine designs and water/fuel injection strategies under non-reacting and reacting flow conditions. According to these simulations, the introduction of water, for both PWI and DWI systems, makes it possible to increase the target performance and to optimize the BSFC (Brake Specific Fuel Consumption) while lowering the knock risk; the TiT target, however, was achieved only with difficulty, and only for one DWI configuration.
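The chemical-kinetics setup is not reported in the abstract; the following sketch shows one common way such auto-ignition effects can be probed, computing a constant-pressure ignition delay with Cantera while adding water to the initial mixture. The GRI-3.0 methane mechanism, the thermodynamic conditions and the simple temperature-rise criterion are stand-ins chosen for illustration; a gasoline surrogate mechanism and the thesis's actual operating points would differ.

# Illustrative only: effect of water dilution on a constant-pressure ignition delay,
# computed with Cantera and the GRI-3.0 methane mechanism as a stand-in for a
# gasoline surrogate. Conditions and mixtures are assumptions, not the thesis's cases.
import cantera as ct

def ignition_delay(moles_h2o, T0=1000.0, p0=30e5, t_end=0.1):
    gas = ct.Solution("gri30.yaml")
    gas.TPX = T0, p0, f"CH4:1.0, O2:2.0, N2:7.52, H2O:{moles_h2o}"
    reactor = ct.IdealGasConstPressureReactor(gas)
    net = ct.ReactorNet([reactor])
    t = 0.0
    while t < t_end:
        t = net.step()
        if reactor.T > T0 + 400.0:   # crude ignition criterion: 400 K temperature rise
            return t
    return None                      # no ignition within t_end

for x in (0.0, 0.5, 1.0):
    tau = ignition_delay(x)
    print(f"H2O moles per mole of CH4 = {x:.1f} -> ignition delay ~ {tau}")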

Relevance:

100.00%

Publisher:

Abstract:

Polymerases and nucleases are enzymes that process DNA and RNA. They are involved in processes crucial for cell life, performing the extension and the cleavage of nucleic acid chains during genome replication and maintenance. Additionally, both enzyme families are often associated with several diseases, including cancer. In order to catalyze the reaction, most of them operate via the two-metal-ion mechanism. To this end, despite relevant differences in structure, function and catalytic properties, they share common catalytic elements, comprising the two catalytic ions and their first-shell acidic residues. Notably, recent studies of different metalloenzymes have revealed the recurrent presence of additional elements surrounding the active site, suggesting an extended two-metal-ion-centered architecture. However, whether these elements have a catalytic function, and what their role is, remains unclear. In this work, using state-of-the-art computational techniques, second- and third-shell elements are shown to act in metallonucleases by favoring substrate positioning and leaving-group release. In particular, in hExo1 a transient third metal ion is recruited and positioned near the two-metal-ion site by a structurally conserved acidic residue to assist the departure of the leaving group. Similarly, in hFEN1 second- and third-shell Arg/Lys residues operate the phosphate steering mechanism through (i) substrate recruitment, (ii) precise cleavage localization, and (iii) leaving-group release. Importantly, structural comparisons of hExo1, hFEN1 and other metallonucleases suggest that similar catalytic mechanisms may be shared by other enzymes. Overall, the results obtained provide an extended vision of the parallel strategies adopted by metalloenzymes, which employ divalent metal ions or positively charged residues to ensure efficient and specific catalysis. Furthermore, these outcomes may have implications for de novo enzyme engineering and/or drug design aimed at modulating nucleic acid processing.

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, a thorough investigation of acoustic noise control systems for realistic automotive scenarios is presented. The thesis is organized in two parts dealing with its main topics: Active Noise Control (ANC) systems and the Virtual Microphone Technique (VMT), respectively. ANC technology increases the driver's and passengers' comfort and safety by exploiting the principle of mitigating the disturbing acoustic noise through the superposition of a secondary sound wave of equal amplitude and opposite phase. Performance analyses of both FeedForward (FF) and FeedBack (FB) ANC systems in experimental scenarios are presented. Since environmental vibration noise within a car cabin is time-varying, most ANC solutions are adaptive; in this work, however, an effective fixed FB ANC system is proposed. Various ANC schemes are considered and compared with each other. In order to find the best possible ANC configuration, which optimizes the performance in terms of disturbing-noise attenuation, a thorough study of key performance indicators (KPIs), system parameters and experimental setup design is carried out. In the second part of this thesis, VMT, based on the estimation of specific acoustic channels, is investigated with the aim of generating a quiet acoustic zone around a confined area, e.g. the driver's ears. A performance analysis and comparison of various estimation approaches is presented. Several measurement campaigns were performed in order to acquire microphone signals of sufficient duration and number across a significant variety of driving scenarios and cars. To do this, different experimental setups were designed and their performance compared. Design guidelines are given to obtain a good trade-off between accuracy and equipment cost. Finally, a preliminary analysis of an innovative approach based on Neural Networks (NNs) to improve the current state of the art in microphone virtualization is proposed.
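The abstract does not report the control algorithms themselves; for orientation, the classic filtered-x LMS (FxLMS) update used by many adaptive feedforward ANC systems is sketched below on synthetic signals. The primary and secondary paths, step size and filter length are invented, and this is not the fixed feedback controller proposed in the thesis.

# Textbook FxLMS feedforward ANC sketch on synthetic signals (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
fs, n = 2000, 20000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 120 * t) + 0.2 * rng.standard_normal(n)  # reference (tone + noise)

P = np.array([0.0, 0.6, 0.3, 0.1])   # invented primary path: reference -> error mic
S = np.array([0.5, 0.25, 0.1])       # invented secondary path: loudspeaker -> error mic
S_hat = S.copy()                     # assume a perfect secondary-path estimate

L = 16                               # adaptive filter length
w = np.zeros(L)
mu = 0.01
xbuf = np.zeros(L)                   # recent reference samples
xfbuf = np.zeros(L)                  # recent filtered-reference samples
ybuf = np.zeros(len(S))              # recent anti-noise samples
e = np.zeros(n)

d = np.convolve(x, P)[:n]            # primary disturbance at the error microphone
for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                     # anti-noise sample from the adaptive filter
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e[i] = d[i] + S @ ybuf           # residual measured at the error microphone
    xf = S_hat @ xbuf[:len(S_hat)]   # reference filtered through the path estimate
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf
    w -= mu * e[i] * xfbuf           # FxLMS weight update

print("residual power, first vs last second:",
      np.mean(e[:fs] ** 2), np.mean(e[-fs:] ** 2))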

Relevance:

100.00%

Publisher:

Abstract:

Embedding intelligence in extreme edge devices allows raw data acquired from sensors to be distilled into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and is driving a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed into Quantized Neural Networks (QNNs) by representing their data down to byte and sub-byte integer formats. However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels and exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current state-of-the-art (SoA) STM32 microcontroller units (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture supporting them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference of SoA MobileNetV2 models, showing two orders of magnitude performance improvement over current SoA analog/digital solutions.
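The sub-byte integer formats mentioned above can be illustrated with a small, self-contained example of 4-bit weight packing and an integer dot product; this only mimics the storage idea and is unrelated to PULP-NN's actual optimized RISC-V kernels.

# Rough illustration of sub-byte (4-bit) weight storage and an integer dot product.
import numpy as np

def pack_int4(w):
    """Pack signed 4-bit values in [-8, 7], two per byte (low nibble first)."""
    u = (np.asarray(w, dtype=np.int8) & 0x0F).astype(np.uint8)
    if len(u) % 2:
        u = np.append(u, 0)
    return (u[0::2] | (u[1::2] << 4)).astype(np.uint8)

def unpack_int4(packed, n):
    """Recover n signed 4-bit values from the packed byte array."""
    lo = (packed & 0x0F).astype(np.int8)
    hi = ((packed >> 4) & 0x0F).astype(np.int8)
    vals = np.empty(2 * len(packed), dtype=np.int8)
    vals[0::2], vals[1::2] = lo, hi
    vals[vals > 7] -= 16                 # sign-extend the 4-bit values
    return vals[:n]

rng = np.random.default_rng(0)
w = rng.integers(-8, 8, size=10, dtype=np.int8)      # 4-bit weights
x = rng.integers(-128, 128, size=10, dtype=np.int8)  # 8-bit activations
packed = pack_int4(w)
assert np.array_equal(unpack_int4(packed, len(w)), w)
acc = int(np.dot(unpack_int4(packed, len(w)).astype(np.int32), x.astype(np.int32)))
print(f"{len(w)} weights stored in {len(packed)} bytes, dot product = {acc}")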

Relevance:

100.00%

Publisher:

Abstract:

The way we live reveals a lot about the choices made in recent decades. These choices are mostly based on a predatory socioeconomic structure, built on the pillars of anthropocentrism and inconsistent with the principles of global sustainability. This fossil-fuel-based structure degrades the environment and directly and indirectly impacts the biomes. According to the International Energy Agency (2020), the building sector was responsible for more than a third of global energy consumption and for about 40% of total GHG emissions into the atmosphere (directly and indirectly). This thesis presents the main effects of climate change observed in the built environment and at the urban territorial scale, through a review of the state of the art on the subject over the last decade (2010-2021). The thesis breaks down the design process, seeking to identify how architects and urban planners can mitigate the effects of climate change by adapting existing structures or new projects, and how they can promote the expansion of the resilience of these building systems.

Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the development of the Sample Fetch Rover (SFR), studied for Mars Sample Return (MSR), an international campaign carried out in cooperation between the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The focus of this document is the design of the electro-mechanical systems of the rover. After placing this work in the general context of robotic planetary exploration and summarising the state of the art of Mars rovers, the architecture of the Mars Sample Return campaign is presented. A complete overview of the current SFR architecture is provided, touching upon all the main subsystems of the spacecraft. For each area, the design drivers and the chosen solutions are discussed, along with whether they rely on heritage technology (in particular from the ExoMars rover) or on new developments. This research focuses on two topics of particular interest, owing to their relevance for the mission and the novelty of their design: locomotion and sample acquisition, which are discussed in depth. The early SFR locomotion concepts are summarised, covering the initial trade-offs and the designs discarded in the pursuit of higher traverse performance. Once a consolidated architecture was reached, the locomotion subsystem was developed further, defining the details of the suspension, actuators, deployment mechanisms and wheels. This technology is presented here in detail, including key analysis and test results that support the design and demonstrate how it meets the mission requirements. Another major electro-mechanical system developed as part of this work is the one dedicated to sample tube acquisition. The concept of operations of this machinery was defined to be robust against the unknown conditions that characterise the mission. The design process led to a highly automated robotic system, which is described here in its main components: the vision system, the robotic arm and the tube storage.

Relevance:

100.00%

Publisher:

Abstract:

In the present work we perform an econometric analysis of the Tribal art market. To this end, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items (level-1 units) are grouped within time points (level-2 units). The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the EM algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit to the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
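The exact model specification is not given in the abstract; as a hedged sketch of the starting point, a basic two-level random-intercept model for log prices, with items i nested within auction dates t, can be written as follows (the thesis's contribution extends this with time-dependent random effects at the second level):

% Generic two-level (random-intercept) sketch of the price model: items i nested
% within auction dates t. Not the thesis's actual specification.
\[
\log P_{it} \;=\; \mathbf{x}_{it}'\boldsymbol{\beta} \;+\; u_{t} \;+\; \varepsilon_{it},
\qquad
u_{t}\sim N\!\left(0,\sigma_{u}^{2}\right),\quad
\varepsilon_{it}\sim N\!\left(0,\sigma_{\varepsilon}^{2}\right),
\]
where $\mathbf{x}_{it}$ collects the hedonic characteristics of item $i$ sold at date $t$, $u_{t}$ is the auction-date random effect and $\varepsilon_{it}$ is the item-level error.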