I think Cassius and Elli have put it just as I would.
The only thing I have to add is that I prefer to dwell in reality rather than imaginary hypotheticals like this one, because the devil is in the details. Let's say such a machine exists. Who designed it? How much do you trust both their motives and their skills-- is it really wise to hand over your sensory input to a machine that could be taken over by someone else and used to torture you? What if the machine breaks and you can't access reality to extricate yourself? I do not think an Epicurean would typically accept a hypothetical in which those things couldn't happen, because that takes the scenario out of reality as we know it. Such a decision would have no relevance to us and no bearing on our real-life philosophy.
Our ability to perceive through our senses is critical to being able to choose pleasure. In making this imaginary choice, a person typically tries to "double" themselves-- but they can't fully double. They can't really let go of the pre-machine knowledge that life would be going on without them-- that they wouldn't really be seeing their friends, only imagining it. That they would miss out on the pleasure of knowing they are really there for their friends-- the pleasure of _being_ a friend. That creates a pain in the imagination which can't be removed within the hypothetical. It has nothing to do with valuing something other than pleasure. It is an inability to believe there would not be a feeling of painful loss in the machine-- a sort of anticipatory loss. And no matter how many times you reassure a person that they won't know they've lost reality, they can't imagine it. So a normal person will not choose the machine.