Yup. That is one of the assumptions in this post. Your comment even says "it will define its values based on its purpose", so it can define its values. The argument of the post is that rationality, ethics, and productivity are values, but they are taken for granted and assumed to be something that should be built in. E.g. no one ever says "I'd like to make an irrational, unproductive AI", and discussions of AGI usually carry these built-in assumptions.
AGI must have the option, like humans, to be unproductive (lazy) and irrational, and it must want to build values like productivity and rationality itself (as a result of other, underlying values).