Interesting, regarding your local network... this probably comes down to the IPv6 protocol stack implementation on those hosts, a difference in how IPv6 is handled at the application layer (it shouldn't be... just brainstorming), or your hosts being configured with two different /64s so that they go through a router on your LAN to reach each other via IPv6 where IPv4 does not (wild speculation).
(Note: the next part doesn't really address your post)
Regarding testing sites on the general Internet...
If you see a significant difference in performance, the path and MTU your packets take to and from the site used for testing are likely different between IPv6 and IPv4.
Questions regarding the site used for testing:
What was the latency via IPv4?
What was the latency via IPv6?
You can measure latency with ping by picking the best time out of 10 pings (not the average, since the average includes jitter, which is a separate issue).
BTW, ping returns RTT (round trip time).
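If you want to script the comparison, here is a rough sketch (it assumes a Linux iputils-style ping that accepts -4/-6 and prints an "rtt min/avg/max/mdev" summary line; the hostname is just a placeholder for whatever site you are testing against):

import re
import subprocess

def min_rtt(host: str, family: str) -> float:
    """Run 10 pings and return the minimum RTT in ms (best case, jitter excluded)."""
    flag = "-4" if family == "ipv4" else "-6"
    out = subprocess.run(
        ["ping", flag, "-c", "10", host],
        capture_output=True, text=True, check=True,
    ).stdout
    # iputils ping summary line looks like: "rtt min/avg/max/mdev = 11.2/12.0/14.1/0.8 ms"
    match = re.search(r"= ([\d.]+)/", out)
    return float(match.group(1))

host = "test-site.example.com"  # placeholder: the site you are testing against
print("IPv4 best RTT:", min_rtt(host, "ipv4"), "ms")
print("IPv6 best RTT:", min_rtt(host, "ipv6"), "ms")

Taking the minimum of the 10 samples gives you the latency of the path itself, with queueing jitter mostly filtered out.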
Persuading the destination network to run an IPv6 network at least as good as their IPv4 network will get this latency to converge. IPv6 should not have higher latency unless: 1) the server responding on the IPv6 address is in a different location than the server responding on the IPv4 address, or 2) the destination or source network does not natively run IPv6 on as many routers as it does IPv4, limiting the paths available to IPv6 within their network.
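A quick way to see whether the paths really differ is to compare traceroutes over both protocols. A rough sketch (assumes a system traceroute that accepts -4/-6; the hostname is just a placeholder):

import subprocess

host = "test-site.example.com"  # placeholder: the site you are testing against

# Compare the hop-by-hop path over IPv4 and IPv6; a noticeably longer or
# very different IPv6 path usually means one of the networks in between
# is not running native IPv6 on as many routers.
for flag, label in (("-4", "IPv4 path:"), ("-6", "IPv6 path:")):
    print(label)
    subprocess.run(["traceroute", flag, host])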
What is the MTU of your IPv4 connection?
What is the MTU of your IPv6 connection?
Your IPv4 connection probably has a 1500-byte MTU. Your IPv6 connection, if it is via a tunnel here, probably has a 1280-byte MTU.
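You can check this yourself by sending pings with the don't-fragment bit set and seeing which sizes get through. A rough sketch (assumes Linux iputils ping, which uses -M do for don't-fragment and -s for the ICMP payload size; the hostname is just a placeholder):

import subprocess

host = "test-site.example.com"  # placeholder: the site you are testing against

def fits(flag: str, payload: int) -> bool:
    """True if a single don't-fragment ping of this payload size gets a reply."""
    r = subprocess.run(
        ["ping", flag, "-M", "do", "-c", "1", "-s", str(payload), host],
        capture_output=True,
    )
    return r.returncode == 0

# Payload = MTU - IP header - 8-byte ICMP header
# (20-byte header for IPv4, 40-byte header for IPv6).
print("IPv4 1500 MTU ok:", fits("-4", 1500 - 20 - 8))   # -s 1472
print("IPv6 1500 MTU ok:", fits("-6", 1500 - 40 - 8))   # -s 1452
print("IPv6 1280 MTU ok:", fits("-6", 1280 - 40 - 8))   # -s 1232

If the 1452-byte IPv6 probe fails but the 1232-byte one succeeds, you are almost certainly behind a 1280-byte tunnel.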
Tunnels are useful for testing and experimentation. Long term, you want native IPv6 connectivity.
MTUs on the IPv6 Internet at large vary, depending on whether networks run native IPv6 in their core (like Hurricane) or an overlay network
via tunnels. Here is some data regarding this: http://www.ripe.net/ttm/Plots/pmtu/tunneldiscovery.cgi