Meta-ethics as an empirical question
The fundamental question of meta-ethics, namely the nature of ethical judgements (are they, in some sense, real?), is an empirical question. If independently derived intelligences converge on a set of ethical statements, that convergence will be evidence that those statements are real.
A good expression of this view comes from Iain Banks's Culture series, specifically The Hydrogen Sonata, in what the author calls the Argument of Increasing Decency:
There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behavior-as-it-was-generally-understood — i.e., not being cruel to others — was as profound as these matters ever got.
In fact, it is not this particular quote that best illustrates the argument, but the Culture series as a whole, which can be seen as a sci-fi rendition of Fukuyama's End of History1.
A not-so-uncommon knee-jerk reaction to the fear of super-intelligent AI is to quip, "well, if the AI does get so intelligent, then it will not be so aggressive." Frankly, I admire your dedication to realistic meta-ethics, but I am not sure we should bet our whole civilization on it.
I think most people are strong moral realists. Many who call themselves relativists turn out, with only modest probing, to be rigid realists who do not even understand the question and think that the only dimensions along which variation is reasonable are relatively shallow cultural practices. Very few would go as far as to say that "paper-clipping the universe is a goal just as valid as trying to achieve a more egalitarian society where multiple people flourish".
If super-intelligent AIs, when they inevitably appear2, do share some moral intuitions with human scripture, then I think we can say that meta-ethics is empirically solved and the realist side will have won. Otherwise, we'll all be dead and the whole question will be a bit irrelevant.
The author of the series would disagree, partly because Fukuyama includes a lightly regulated free market as part of his End of History, while the Culture series takes a more Marxist view of history, with communism as the pinnacle of civilization. However, not only should we not trust authors too much when they discuss their own work (given how likely they are to be biased), but, implicitly, the series agrees with E. O. Wilson's comment about communism: great system, wrong species (he meant that communism is great for ants, not for humans), as the Culture is run by powerful super-computers (Minds) and humans are, basically, pets (see this quote from Surface Detail: “Though drones, avatars and even humans are one thing; the loss of any is not without moral and diplomatic import, of course, but might be dismissed as merely unfortunate and regrettable, something to be smoothed over through the usual channels. Attacking a ship, on the other hand, is an unambiguous act of war.”). The books are also at their most insightful when they argue forcefully that the West (the Culture) may tremble a bit when faced with violent religious fanatics, but is actually more militaristic and less decadent than these religious lunatics believe (and than it itself thinks), so that, in the end, the Islamic State (represented by the Idirans) doesn't really stand a chance.↩
I am not predicting that this will happen any time soon. But I don't see why it shouldn't happen in the next few centuries: comparing our knowledge and technological abilities today with those of a millennium ago, it seems more reasonable to posit super-human AI by the year 3000 than to deny its possibility.↩