910 results for wot,iot,iot-system,digital-twin,framework,least-squares


Relevância: 100.00%

Resumo:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smallest convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
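The projection-and-extreme-selection loop described above can be sketched compactly. The following is a minimal, hedged illustration of a VCA-style pure-pixel search, assuming the data have already been reduced to a p-dimensional signal subspace; all function and variable names (e.g. vca_like_extraction) are ours for illustration, not the chapter's code.

```python
import numpy as np

def vca_like_extraction(Y, p, rng=None):
    """Illustrative VCA-style endmember extraction (not the authors' code).

    Y : (p, n) array of spectral vectors already projected onto the
        p-dimensional signal subspace.
    p : number of endmembers to extract.
    Returns the indices of the pixels selected as endmembers.
    """
    rng = np.random.default_rng(rng)
    _, n = Y.shape
    E = np.zeros((p, p))          # columns hold the endmember signatures found so far
    indices = []
    for i in range(p):
        # Draw a random direction and project it onto the orthogonal
        # complement of the subspace spanned by the endmembers found so far.
        w = rng.standard_normal(p)
        if i > 0:
            A = E[:, :i]                          # current endmembers
            proj = A @ np.linalg.pinv(A)          # projector onto span(A)
            w = w - proj @ w                      # orthogonal direction
        f = w / np.linalg.norm(w)
        # The new endmember is the pixel with the extreme projection.
        v = f @ Y
        k = int(np.argmax(np.abs(v)))
        indices.append(k)
        E[:, i] = Y[:, k]
    return indices

# Toy usage: 3 endmembers mixed into 1000 pixels with random abundances
# (with random abundances the selected pixels are only near-vertices).
p, n = 3, 1000
M = np.random.rand(p, p)                          # endmember signatures
a = np.random.dirichlet(np.ones(p), size=n).T     # abundances sum to one
Y = M @ a
print(vca_like_extraction(Y, p, rng=0))
```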

Relevância: 100.00%

Resumo:

The mathematical model of a real system provides knowledge of its dynamic behaviour and is commonly used in engineering problems. Sometimes the parameters used by the model are unknown or imprecise. Ageing and material wear are factors to take into account, since they can change the behaviour of the real system, and it may become necessary to re-estimate its parameters. To solve this problem, software developed by MathWorks, namely Matlab and Simulink, is used together with the Arduino platform, whose hardware is open source. From data acquired from the real system, curve fitting by the least-squares method is applied in order to bring the simulated model closer to the model of the real system. The developed system allows new parameter values to be obtained in a simple and effective way, with a view to a better approximation of the real system under study. The solution found is validated using different input signals applied to the system, and its results are compared with the results of the newly obtained model. The performance of the solution is evaluated through the sum of squared errors between results obtained by simulation and results obtained experimentally from the real system.
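To make the parameter re-estimation step concrete, here is a minimal sketch in Python, assuming the real system can be approximated by a first-order step response; the model, data and parameter values are invented placeholders (the dissertation itself uses Matlab/Simulink and an Arduino).

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed example model: first-order step response y = K * (1 - exp(-t/tau)).
def step_response(t, K, tau):
    return K * (1.0 - np.exp(-t / tau))

# Stand-in for data logged from the real system (e.g. via an Arduino).
t = np.linspace(0.0, 5.0, 200)
true_K, true_tau = 2.0, 0.8
y_measured = step_response(t, true_K, true_tau) + np.random.normal(0.0, 0.05, t.size)

# Least-squares curve fit to re-estimate the model parameters.
(K_hat, tau_hat), _ = curve_fit(step_response, t, y_measured, p0=[1.0, 1.0])

# Validation metric used in the abstract: sum of squared errors
# between simulated and measured responses.
sse = np.sum((step_response(t, K_hat, tau_hat) - y_measured) ** 2)
print(f"K = {K_hat:.3f}, tau = {tau_hat:.3f}, SSE = {sse:.4f}")
```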

Relevância: 100.00%

Resumo:

Nowadays, reducing energy consumption is one of the highest priorities and biggest challenges faced worldwide, and in particular in the industrial sector. Given the increasing trend of consumption and the current economic crisis, identifying cost reductions in the most energy-intensive sectors has become one of the main concerns among companies and researchers. Particularly in industrial environments, energy consumption is affected by several factors, namely production factors (e.g. equipment), human factors (e.g. operator experience) and environmental factors (e.g. temperature), among others, which influence how energy is used across the plant. Therefore, several approaches for identifying consumption causes have been suggested and discussed. However, the existing methods only provide guidelines for energy consumption and have shown difficulties in explaining certain energy consumption patterns due to the lack of structure to incorporate context influence; hence they are not able to track down the causes of consumption to a process level, where optimization measures can actually take place. This dissertation proposes a new approach to tackle this issue, based on the on-line estimation of context-based energy consumption models that are able to map the operating context to consumption patterns. Context identification is performed by regression tree algorithms. Energy consumption estimation is achieved by means of a multi-model architecture using multiple recursive least squares (RLS) algorithms, locally estimated for each operating context. Lastly, the proposed approach is applied to a real cement plant grinding circuit. Experimental results prove the viability of the overall system, regarding both automatic context identification and energy consumption estimation.
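A minimal sketch of the multi-model idea, assuming each sample already carries a discrete operating-context label (in the dissertation this comes from a regression tree) and using a textbook recursive least squares update; the class names, rates and toy data are illustrative only.

```python
import numpy as np

class RLSModel:
    """Standard recursive least squares estimator with a forgetting factor."""
    def __init__(self, n_features, lam=0.99, delta=1e3):
        self.theta = np.zeros(n_features)        # parameter estimates
        self.P = delta * np.eye(n_features)      # inverse covariance
        self.lam = lam

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)             # gain vector
        err = y - x @ self.theta                 # prediction error
        self.theta += k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

# One local RLS model per operating context (multi-model architecture).
models = {}
def update_context_model(context, x, y, n_features):
    if context not in models:
        models[context] = RLSModel(n_features)
    return models[context].update(x, y)

# Toy stream: energy = w_c . x + noise, with context-dependent weights.
rng = np.random.default_rng(0)
weights = {0: np.array([2.0, 0.5]), 1: np.array([1.0, 1.5])}
for _ in range(500):
    c = int(rng.integers(0, 2))                  # context label (e.g. from a tree)
    x = rng.random(2)
    y = weights[c] @ x + rng.normal(0, 0.01)
    update_context_model(c, x, y, n_features=2)
print({c: m.theta.round(2) for c, m in models.items()})
```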

Relevância: 100.00%

Resumo:

The following study aims to examine a controversial and relatively unexplored subject within our legal system: the legal framework on unfair business-to-consumer commercial practices. Given that this subject is based on Directive 2005/29/EC, we considered it appropriate to explore, firstly, the background and origin of that normative instrument. Nevertheless, we have centered our analysis on the interpretation of the set of rules established by the Portuguese legal system (Law nr 57/2008, of March 26th). For this dissertation, we have proposed a tripartite approach. Chapter V seeks to shed light on the general clause by analyzing a set of open concepts such as professional diligence, honest market practice, good faith and material distortion of the consumer's economic behavior. In Chapter VI, we focus on two common types of unfair commercial practices: misleading and aggressive practices. Finally, since Chapter VII deals with the black list, we have illustrated the listed practices with real-life examples. Taking into account the indefinite concepts used in the general prohibition and in the misleading and aggressive clauses, it is particularly difficult to demonstrate the unfairness of the professional's behavior. In the light of this, we have concluded that the regime fails to achieve its main goal: it does not properly and effectively protect the consumer's interests.

Relevância: 100.00%

Resumo:

The main objective of this case study is to analyse and evaluate the use of a Technology-Enhanced Learning Environment (TELE) in higher education, through what is usually called eLearning, and to understand the impact these methodologies are having on face-to-face teaching, how they are being used, and how students and teachers have been confronted with this reality. Specifically, it aims to analyse the impact of implementing an eLearning model on learning and to understand the relationship between a methodological strategy supported by the Moodle LMS in the classroom, the digital competences and skills that students have, and how this plays out in terms of teaching and learning. Moodle was the learning platform selected to support the teaching-learning process in the Edição Multimédia (Multimedia Editing) course unit of the Licenciatura em Comunicação Social e Cultural (Social and Cultural Communication) degree at the Universidade Católica Portuguesa (UCP), with a class of 42 students in total. It was therefore the environment used for interaction among students, and between students and the teacher, outside class time and space. To meet the proposed objectives, three data collection instruments were used: two student questionnaires, administered at different moments (the first sought to gather information about the students' digital competences, and the second to assess their perception of, and level of satisfaction with, the implemented learning model); non-participant classroom observation (structured and naturalistic), covering the following dimensions: strategies operationalised by the teacher, materials/resources and tools used, and student practices and attitudes; and platform records, through the analysis of the interactions among students and between them and the teacher in the discussion forums. The study confirmed a very positive impact on students' satisfaction levels and established an effective relationship between technology and the acquisition of meaningful learning: it fostered active, interactive learning and a context for collaborative work, with a consequent capacity for self-regulation of learning; it promoted the development of digital literacy; it enabled the adoption of diversified learning methodologies; and it contributed to increased student participation, motivation and enthusiasm.

Relevância: 100.00%

Resumo:

Madin-Darby Canine Kidney (MDCK) cell lines have been extensively evaluated for their potential as host cells for influenza vaccine production. Recent studies allowed the cultivation of these cells in a fully defined medium and in suspension. However, reaching high cell densities in animal cell cultures still remains a challenge. To address this shortcoming, a combined methodology allied with knowledge from systems biology was reported to study the impact of the cell environment on the flux distribution. An optimization of the medium composition was proposed for both a batch and a continuous system in order to reach higher cell densities. To obtain insight into the metabolic activity of these cells, a detailed metabolic model previously developed by Wahl et al. was used. The experimental data from four cultivations of MDCK suspension cells, grown under different conditions and used in this work, came from the Max Planck Institute, Magdeburg, Germany. Classical metabolic flux analysis (MFA) was used to estimate the intracellular flux distribution of each cultivation and then combined with the partial least squares (PLS) method to establish a link between the estimated metabolic state and the cell environment. The MFA model was validated and its consistency checked. The resulting PLS model explained almost 70% of the variance present in the flux distribution. The medium optimization for the continuous system and for the batch system resulted in higher biomass growth rates than the ones obtained experimentally, 0.034 h-1 and 0.030 h-1, respectively, thus reducing the doubling time by almost 10 hours. Additionally, the optimal medium obtained for the continuous system contained almost no pyruvate. Overall, the proposed methodology seems to be effective, and both proposed medium optimizations seem promising for reaching high cell densities.
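The MFA-plus-PLS coupling can be sketched as follows, assuming the intracellular fluxes have already been estimated by metabolic flux analysis and stacked into a response matrix; the data below are synthetic placeholders, not the MDCK data set, and the code only illustrates the general approach.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins: rows are cultivations/time points,
# X columns are environment variables (e.g. glucose, glutamine, pH),
# Y columns are flux estimates obtained beforehand from MFA.
X = rng.random((40, 6))
true_B = rng.normal(size=(6, 10))
Y = X @ true_B + 0.1 * rng.normal(size=(40, 10))

# PLS links the cell environment to the estimated flux distribution.
pls = PLSRegression(n_components=3)
pls.fit(X, Y)

# Fraction of flux variance explained by the model (cf. ~70% in the abstract).
r2 = pls.score(X, Y)
print(f"explained variance (R^2): {r2:.2f}")

# Predicted flux pattern for a candidate medium composition.
candidate_medium = rng.random((1, 6))
print(pls.predict(candidate_medium).round(2))
```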

Relevância: 100.00%

Resumo:

The following project introduces a model of Growth Hacking strategies for business-to-business Software-as-a-Service startups, developed in collaboration with and applied to a Portuguese startup called Liquid. The work addresses digital marketing channels such as content marketing, email marketing, social marketing and selling. Further, the company's product, pricing strategy, partnerships and website communication are examined. Drawing on best practices, competitor benchmarks and interview insights from numerous industry influencers and experts, areas for improvement are identified and procedures for each of those channels are recommended.

Relevância: 100.00%

Resumo:

Kernel-Functions, Machine Learning, Least Squares, Speech Recognition, Classification, Regression

Relevância: 100.00%

Resumo:

The framework presents how trading in the foreign commodity futures market and the forward exchange market can affect the optimal spot positions of domestic commodity producers and traders. It generalizes the models of Kawai and Zilcha (1986) and Kofman and Viaene (1991) to allow both intermediate and final commodities to be traded in the international and futures markets, and the exporters/importers to face production shocks, domestic factor costs and a random price. Applying mean-variance expected utility, we find that a rise in the expected exchange rate can raise both supply and demand for commodities and reduce domestic prices if the exchange rate elasticity of supply is greater than that of demand. Whether higher volatilities of the exchange rate and the foreign futures price reduce the optimal spot position of domestic traders depends on the correlation between the exchange rate and the foreign futures price. Even though the forward exchange market is unbiased, and there is no correlation between commodity prices and exchange rates, the exchange rate can still affect domestic trading and prices through offshore hedging and international trade if the traders are interested in their profit in domestic currency. The framework illustrates how the world prices and foreign futures prices of commodities, and their volatility, can be transmitted to the domestic market, as well as the dynamic relationship between intermediate and final goods prices. The equilibrium prices depend on trader behaviour, i.e. on who does or does not trade in the foreign commodity futures and domestic forward currency markets. The empirical results, obtained by applying a two-stage least squares approach to Thai rice and rubber prices, support the theoretical results.
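As an illustration of the estimator mentioned in the last sentence, the sketch below implements textbook two-stage least squares on synthetic data rather than the Thai rice and rubber series; all variable names and coefficients are invented.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Basic 2SLS: instrument the endogenous regressors in X with Z.

    y : (n,) outcome, X : (n, k) regressors (with constant),
    Z : (n, m) instruments (with constant), m >= k.
    """
    # Stage 1: project the regressors onto the instrument space.
    beta1, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ beta1
    # Stage 2: regress the outcome on the fitted regressors.
    beta2, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta2

# Synthetic example: price is endogenous (correlated with the error),
# and an exchange-rate / futures-price proxy serves as the instrument.
rng = np.random.default_rng(2)
n = 2000
z = rng.normal(size=n)                            # instrument (e.g. exchange rate)
u = rng.normal(size=n)                            # structural error
price = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
demand = 1.0 - 2.0 * price + u                    # true coefficient on price is -2

X = np.column_stack([np.ones(n), price])
Z = np.column_stack([np.ones(n), z])
print(two_stage_least_squares(demand, X, Z).round(2))   # approx [1.0, -2.0]
```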

Relevância: 100.00%

Resumo:

BACKGROUND: In vitro aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used for the detection of nervous system-specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration-dependent neurotoxicity is determined at several time points. METHODS: A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be either in a healthy or a stressed state, with only stressed cells being susceptible to cell death. Cells could switch between these states or die, with concentration-dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied to estimate the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time-concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death. RESULTS: The model predicted that propofol would affect stressed cells more than healthy cells. Increasing the propofol concentration from 10 to 100 μM reduced the mean waiting time for the transition to the stressed state by 50%, from 14 to 7 days, whereas the mean time to cellular death decreased more dramatically, from 2.7 days to 6.5 hours. CONCLUSION: The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the transition rate estimates can be compared quantitatively. Data can be extrapolated at late measurement time points to investigate whether costly and time-consuming long-term experiments could possibly be eliminated.
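A minimal sketch of the healthy/stressed/dead Markov model described above, assuming constant concentration-dependent transition rates and treating intracellular LDH activity as proportional to the number of live cells; the rate values and function names are illustrative, not estimates from the study.

```python
import numpy as np
from scipy.linalg import expm

def state_distribution(k_hs, k_sh, k_sd, t):
    """Continuous-time Markov chain over the states (healthy, stressed, dead).

    k_hs : healthy -> stressed rate, k_sh : stressed -> healthy rate,
    k_sd : stressed -> dead rate (only stressed cells die), all per day.
    Returns the occupation probabilities at time t (days), starting healthy.
    """
    Q = np.array([
        [-k_hs,        k_hs,        0.0 ],
        [ k_sh, -(k_sh + k_sd),     k_sd],
        [ 0.0,         0.0,         0.0 ],   # death is absorbing
    ])
    p0 = np.array([1.0, 0.0, 0.0])
    return p0 @ expm(Q * t)

# Illustrative rates for two propofol concentrations; the mean waiting time
# for healthy -> stressed is 1 / k_hs (e.g. 14 days vs. 7 days in the abstract).
for label, k_hs in [("10 uM", 1 / 14), ("100 uM", 1 / 7)]:
    healthy, stressed, dead = state_distribution(k_hs, 0.05, 0.2, t=7.0)
    # LDH activity is taken as proportional to the live fraction.
    print(f"{label}: live fraction after 7 days = {healthy + stressed:.2f}")
```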

Relevância: 100.00%

Resumo:

Inconsistencies about the dynamic asymmetry between the on- and off-transient responses in VO2 are found in the literature. Therefore, the purpose of this study was to examine VO2 on- and off-transients during moderate- and heavy-intensity cycling exercise in trained subjects. Ten men underwent an initial incremental test for the estimation of the ventilatory threshold (VT) and, on different days, two bouts of square-wave exercise at moderate (<VT) and heavy (>VT) intensities. VO2 kinetics in exercise and recovery were better described by a single exponential model (<VT), or by a double exponential with two time delays (>VT). For moderate exercise, we found a symmetry of VO2 kinetics between the on- and off-transients (i.e., the fundamental component), consistent with a system manifesting linear control dynamics. For heavy exercise, a slow component superimposed on the fundamental phase was expressed in both exercise and recovery, with similar parameter estimates. However, the on-transient values of the time constant were appreciably faster than the associated off-transient values, and independent of the work rate imposed (<VT and >VT). Our results do not support a dynamically linear system model of VO2 during cycling exercise in the heavy-intensity domain.
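As a concrete example of the kinetics models referred to above, the sketch below fits a mono-exponential on-transient with a time delay to synthetic breath-by-breath data; it is a generic curve fit under invented parameter values, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, baseline, amplitude, delay, tau):
    """VO2(t) = baseline + amplitude * (1 - exp(-(t - delay)/tau)) for t >= delay."""
    response = np.where(t >= delay,
                        1.0 - np.exp(-(t - delay) / tau),
                        0.0)
    return baseline + amplitude * response

# Synthetic on-transient: baseline 0.8 L/min, amplitude 1.5 L/min,
# time delay 15 s, time constant 30 s, plus breath-to-breath noise.
t = np.linspace(0, 360, 361)
vo2 = mono_exponential(t, 0.8, 1.5, 15.0, 30.0) + np.random.normal(0, 0.05, t.size)

params, _ = curve_fit(mono_exponential, t, vo2, p0=[1.0, 1.0, 10.0, 20.0])
print("baseline, amplitude, delay, tau =", np.round(params, 2))
```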

Relevância: 100.00%

Resumo:

BACKGROUND Functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians in the diagnosis of Alzheimer's Disease (AD). However, the subjectivity involved in their evaluation has favoured the development of Computer Aided Diagnosis (CAD) systems. METHODS A novel combination of feature extraction techniques is proposed to improve the diagnosis of AD. Firstly, Regions of Interest (ROIs) are selected by means of a t-test carried out on 3D Normalised Mean Square Error (NMSE) features restricted to be located within a predefined brain activation mask. In order to address the small sample size problem, the dimension of the feature space was further reduced by: Large Margin Nearest Neighbours using a rectangular matrix (LMNN-RECT), Principal Component Analysis (PCA) or Partial Least Squares (PLS) (the two latter also analysed with an LMNN transformation). Regarding the classifiers, kernel Support Vector Machines (SVMs) and LMNN using Euclidean, Mahalanobis and energy-based metrics were compared. RESULTS Several experiments were conducted in order to evaluate the proposed LMNN-based feature extraction algorithms and their benefits as: i) a linear transformation of the PLS- or PCA-reduced data, ii) a feature reduction technique, and iii) a classifier (with Euclidean, Mahalanobis or energy-based methodology). The system was evaluated by means of k-fold cross-validation, yielding accuracy, sensitivity and specificity values of 92.78%, 91.07% and 95.12% (for SPECT) and 90.67%, 88% and 93.33% (for PET), respectively, when the NMSE-PLS-LMNN feature extraction method was used in combination with an SVM classifier, thus outperforming recently reported baseline methods. CONCLUSIONS All the proposed methods turned out to be valid solutions for the presented problem. One of the advances is the robustness of the LMNN algorithm, which not only provides a higher separation rate between the classes but also makes (in combination with NMSE and PLS) this rate variation more stable. In addition, their generalization ability is another advance, since several experiments were performed on two image modalities (SPECT and PET).
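The feature-reduction-plus-classifier chain can be illustrated with standard tools. Below is a minimal, hedged sketch of a PLS-then-SVM pipeline evaluated with k-fold cross-validation on synthetic data; it omits the NMSE/ROI selection and LMNN steps, and the PLSReducer class and all data are our own illustrative constructs, not the authors' implementation.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

class PLSReducer(BaseEstimator, TransformerMixin):
    """Use PLS component scores as a supervised dimensionality reduction step."""
    def __init__(self, n_components=10):
        self.n_components = n_components
    def fit(self, X, y):
        self.pls_ = PLSRegression(n_components=self.n_components)
        self.pls_.fit(X, y)
        return self
    def transform(self, X):
        return self.pls_.transform(X)

# Synthetic stand-in for voxel/ROI features: 100 scans x 500 features, 2 classes.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=100)
X = rng.normal(size=(100, 500)) + 0.5 * y[:, None] * rng.normal(size=(1, 500))

pipeline = Pipeline([
    ("pls", PLSReducer(n_components=10)),
    ("svm", SVC(kernel="rbf", C=1.0)),
])
scores = cross_val_score(pipeline, X, y, cv=5)   # k-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")
```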

Relevância: 100.00%

Resumo:

Lisdexamfetamine dimesylate (LDX) is a long-acting prodrug stimulant therapy for patients with attention-deficit/hyperactivity disorder (ADHD). This randomized placebo-controlled trial of an optimized daily dose of LDX (30, 50 or 70 mg) was conducted in children and adolescents (aged 6-17 years) with ADHD. To evaluate the efficacy of LDX throughout the day, symptoms and behaviors of ADHD were evaluated using an abbreviated version of the Conners' Parent Rating Scale-Revised (CPRS-R) at 1000, 1400 and 1800 hours following early morning dosing (0700 hours). Osmotic-release oral system methylphenidate (OROS-MPH) was included as a reference treatment, but the study was not designed to support a statistical comparison between LDX and OROS-MPH. The full analysis set comprised 317 patients (LDX, n = 104; placebo, n = 106; OROS-MPH, n = 107). At baseline, CPRS-R total scores were similar across treatment groups. At endpoint, differences (active treatment minus placebo) in least squares (LS) mean change from baseline in CPRS-R total scores were statistically significant (P < 0.001) throughout the day for LDX (effect sizes: 1000 hours, 1.42; 1400 hours, 1.41; 1800 hours, 1.30) and OROS-MPH (effect sizes: 1000 hours, 1.04; 1400 hours, 0.98; 1800 hours, 0.92). Differences in LS mean change from baseline to endpoint were statistically significant (P < 0.001) for both active treatments in all four subscales of the CPRS-R (ADHD index, oppositional, hyperactivity and cognitive). In conclusion, improvements relative to placebo in ADHD-related symptoms and behaviors in children and adolescents receiving a single morning dose of LDX or OROS-MPH were maintained throughout the day and were still evident at the last measurement in the evening (1800 hours).

Relevância: 100.00%

Resumo:

Customer satisfaction and retention are key issues for organizations in today's competitive marketplace. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the instigation of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI models in preference to structural equation models (SEM) because it does not rely on strict assumptions about the data. However, this choice was based upon some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, the SEM and PLS approaches are compared by evaluating perceptions of the Isle of Man Post Office's products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.

Relevância: 100.00%

Resumo:

The great majority of plant species in the tropics require animals to achieve pollination, but the exact role of floral signals in the attraction of animal pollinators is often debated. Many plants provide a floral reward to attract a guild of pollinators, and it has been proposed that the floral signals of non-rewarding species may converge on those of rewarding species to exploit the relationship of the latter with their pollinators. In the orchid family (Orchidaceae), pollination is almost universally animal-mediated, but a third of species provide no floral reward, which suggests that deceptive pollination mechanisms are prevalent. Here, we examine floral colour and shape convergence in Neotropical plant communities, focusing on certain food-deceptive Oncidiinae orchids (e.g. Trichocentrum ascendens and Oncidium nebulosum) and rewarding species of Malpighiaceae. We show that species from these two distantly related families are often more similar in floral colour and shape than expected by chance, and propose that a system of multifarious floral mimicry, a form of Batesian mimicry that involves multiple models and is more complex than a simple one-model, one-mimic system, operates in these orchids. The same mimetic pollination system has evolved at least 14 times within the species-rich Oncidiinae throughout the Neotropics. These results help explain the extraordinary diversification of Neotropical orchids and highlight the complexity of plant-animal interactions.