“The hypothesis that evolvability – the capacity to evolve by natural selection – is itself the object of natural selection is highly intriguing but remains controversial due in large part to a paucity of direct experimental evidence” (Graves et al., 2013, p. e1003766).
As Graves et al. state, the concept of evolvability is controversial. The authors go on to give two primary reasons for the debate: 1) evolvability, as the term has traditionally been used, addresses evolution at the population level and therefore must be subject to the comparatively weak forces that drive natural selection at that level; and 2) the concept of evolvability, it has been argued, would require foresight on the part of natural selection, an idea most biological scientists (myself included) reject.
At the level of the gene, however, these arguments may lose ground. Not being a population geneticist, I won't pretend to have any expertise on point #1, so I won't even try. However, the idea that selection for evolvability necessitates some sort of cognizant foresight strikes me as sophistry: reasoning that sounds convincing but is actually false.
Artwork by Ericailcane.
From my studies in genomics across different types of genes and different species, I suspect that evolvability can be selected for because 1) it provides an immediate neutral or positive advantage to a given gene; and 2) the sequences that often underlie dynamic evolvability (e.g., repetitive DNA such as transposons, segmental duplications, and other smaller repeats) have the potential to promote even further mutability in those same genes over time. In short, repeat sequences tend to breed more repeat sequences. And some classes of genes appear able to take advantage of and work with this large repetitive content, which is most often housed within the intronic regions. Other genes, however, such as the housekeeping genes, cannot maintain their necessary functions alongside large repetitive content, and have therefore evolved to keep such content to a tight minimum.
My guess is that through most of evolutionary history, new repetitive content that was not immediately detrimental was instead initially neutral. Insertions, expansions, deletions, or inversions that provided some sort of positive adaptation may have arisen more slowly, their benefits not immediately realized. Take, for instance, Alu element insertions: when one does land in an exon and becomes part of one of the gene's untranslated regions (UTR), the element usually requires further mutation over the millennia before it eventually becomes part of the protein-coding sequence, if, that is, it ever does (unpublished data). Many repetitive elements may also be exapted to serve as regulatory sequences, even if only to prevent their own transcription.
That repetitive content, however, adds a certain level of instability to a gene and makes further expansions, deletions, inversions, or insertions even more likely, perpetuating a snowball cycle. For example, it is known that common fragile sites, the most unstable regions of the genome, frequently house very large genes. These large genes, in turn, very often carry an extremely large transposable element load. Somehow these genes still manage to function with such large intronic content, although I don't believe it is currently understood how that occurs. Nevertheless, these genes seem to have adapted well enough.
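To make the snowball intuition concrete, here is a purely illustrative toy simulation (not a biological model; the rates and the linear feedback are made-up assumptions for the sketch). The idea is simply that the more repeats a gene already carries, the higher its per-generation chance of gaining another, so a repeat-rich gene runs away from a repeat-poor one:

```python
import random

def simulate_repeat_growth(initial_repeats, generations,
                           base_rate=0.01, per_repeat_rate=0.002, seed=42):
    """Toy model of the 'snowball cycle': each generation, the chance of
    gaining one new repeat grows linearly with the repeats already present.
    All rates are arbitrary illustrative values, not measured ones."""
    random.seed(seed)
    count = initial_repeats
    history = [count]
    for _ in range(generations):
        p_gain = min(1.0, base_rate + per_repeat_rate * count)
        if random.random() < p_gain:
            count += 1
        history.append(count)
    return history

# A repeat-rich gene vs. a repeat-poor (housekeeping-like) gene,
# run under the same random draws for a fair comparison.
repeat_rich = simulate_repeat_growth(initial_repeats=50, generations=500)
repeat_poor = simulate_repeat_growth(initial_repeats=2, generations=500)
print("repeat-rich final:", repeat_rich[-1])
print("repeat-poor final:", repeat_poor[-1])
```

Under these toy parameters the repeat-rich gene accumulates new repeats much faster, which is the whole point of the feedback: instability begets instability.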
I suspect, and am in the long process of investigating, that certain gene groups' propensity toward mutation has been selected for, especially among genes that are closely related (I guess that's a no-brainer), but also possibly among genes that share similar functions, such as tumor suppressors. Ultimately, I hypothesize that repetitive content has shaped the expansion and tissue specification of certain genes, and that the activity of those genes has in turn driven further repetitive expansions. I guess time will tell whether that's true. More to come later!