See if you can follow what I see here: curvilinearity, and it can be dissected.
Look at the 0-20, the 20-40, and the 40-60 portions. By portion I mean the narrow rectangle in each case that sits atop the rank axis. For 0-20, the meter scores range from 125 to 180. For 20-40, they drop, ranging from 120 to 155. For 40-60, they drop again, from 108 to 155. These three blocks produce a clear association between BB ranks and meter scores in the direction one would expect: as the ranks worsen, the meter scores fall. Voters and meter are on the same page.
After that, the correlation disappears. The relation flattens and becomes horizontal, which causes the curvilinear appearance. Although the ranks get worse and worse as we move to the right, the meter scores remain stubbornly above 120, with the exception of one 110 score.
I suggest this is an artifact of the ranks and of the voting, both the number of voters and how the votes could be cast. The ranks between 80 and 140 are NECESSARY. Films that the noir meter measures quite highly HAVE to be pushed into worse ranks, because those ranks are all that is left after the votes are exhausted on the higher-ranked films.
My suggestion in the prior post of converting the meter scores to ranks won't solve this problem; it will make the correlation worse. The rank differences produced by the votes and by the meter are not really commensurable.
The data suggest the following. From roughly rank 60 out to nearly 140, the films there (about 27 of them) all have about the same quality as measured by the meter, typically 120 to 140, which is quite high. If those scores have merit, the worse rankings are an artifact: the voting has to rate something worse. The other possibility is that the meter calls a noir a noir at the lower rungs of the ladder but inadequately assesses quality at these ranks. In other words, do we believe the meter or the ranks produced by the voting?
From roughly rank 60 all the way up to 1, the noir-meter score does tend to rise as the rank improves.
This is interesting work. I hope you can turn up some other ranking methods to test the meter against, for example the IMDb score, or perhaps the scores from some subset of its demographic breakdown. You can always try Maltin's scale, which has 7 classes: 4, 3.5, 3, 2.5, 2, 1.5, and BOMB. You could see whether the average meter score within each of the 7 classes rises with Maltin's rating.
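The Maltin comparison would just be a group-by-class average. A sketch with invented film rows (the titles, classes, and meter scores below are placeholders, not real data):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (film, Maltin class, meter score) rows; the classes are
# the seven listed above: 4, 3.5, 3, 2.5, 2, 1.5, and BOMB.
films = [
    ("A", 4.0, 162), ("B", 4.0, 150), ("C", 3.5, 148),
    ("D", 3.0, 140), ("E", 3.0, 131), ("F", 2.5, 128),
    ("G", 2.0, 120), ("H", 1.5, 117), ("I", "BOMB", 105),
]

by_class = defaultdict(list)
for _, maltin, meter in films:
    by_class[maltin].append(meter)

# Average meter score per Maltin class, best class first.
class_means = {
    c: mean(by_class[c])
    for c in (4.0, 3.5, 3.0, 2.5, 2.0, 1.5, "BOMB")
    if c in by_class
}
for c, m in class_means.items():
    print(c, round(m, 1))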